00:00:00.000 Started by upstream project "autotest-nightly" build number 4357 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3720 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.152 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.153 The recommended git tool is: git 00:00:00.153 using credential 00000000-0000-0000-0000-000000000002 00:00:00.155 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.215 Fetching changes from the remote Git repository 00:00:00.217 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.254 Using shallow fetch with depth 1 00:00:00.254 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.254 > git --version # timeout=10 00:00:00.290 > git --version # 'git version 2.39.2' 00:00:00.290 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.319 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.319 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.187 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.198 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.208 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:08.208 > git config core.sparsecheckout # timeout=10 00:00:08.220 > git read-tree -mu HEAD # timeout=10 00:00:08.234 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:08.250 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:08.250 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:08.371 [Pipeline] Start of Pipeline 00:00:08.384 [Pipeline] library 00:00:08.386 Loading library shm_lib@master 00:00:08.386 Library shm_lib@master is cached. Copying from home. 00:00:08.404 [Pipeline] node 00:00:08.436 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:08.438 [Pipeline] { 00:00:08.446 [Pipeline] catchError 00:00:08.447 [Pipeline] { 00:00:08.457 [Pipeline] wrap 00:00:08.463 [Pipeline] { 00:00:08.469 [Pipeline] stage 00:00:08.470 [Pipeline] { (Prologue) 00:00:08.483 [Pipeline] echo 00:00:08.484 Node: VM-host-SM9 00:00:08.489 [Pipeline] cleanWs 00:00:08.496 [WS-CLEANUP] Deleting project workspace... 00:00:08.496 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.501 [WS-CLEANUP] done 00:00:08.814 [Pipeline] setCustomBuildProperty 00:00:08.914 [Pipeline] httpRequest 00:00:09.712 [Pipeline] echo 00:00:09.713 Sorcerer 10.211.164.20 is alive 00:00:09.724 [Pipeline] retry 00:00:09.727 [Pipeline] { 00:00:09.742 [Pipeline] httpRequest 00:00:09.746 HttpMethod: GET 00:00:09.747 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.748 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.770 Response Code: HTTP/1.1 200 OK 00:00:09.770 Success: Status code 200 is in the accepted range: 200,404 00:00:09.771 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:16.991 [Pipeline] } 00:00:17.009 [Pipeline] // retry 00:00:17.016 [Pipeline] sh 00:00:17.295 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:17.310 [Pipeline] httpRequest 00:00:17.729 [Pipeline] echo 00:00:17.731 Sorcerer 10.211.164.20 is alive 00:00:17.741 [Pipeline] retry 00:00:17.743 [Pipeline] { 00:00:17.756 [Pipeline] httpRequest 00:00:17.760 HttpMethod: GET 00:00:17.761 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:17.761 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:17.769 Response Code: HTTP/1.1 200 OK 00:00:17.769 Success: Status code 200 is in the accepted range: 200,404 00:00:17.770 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:01:31.948 [Pipeline] } 00:01:31.966 [Pipeline] // retry 00:01:31.973 [Pipeline] sh 00:01:32.254 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:01:35.627 [Pipeline] sh 00:01:35.908 + git -C spdk log --oneline -n5 00:01:35.908 e01cb43b8 mk/spdk.common.mk sed the minor version 00:01:35.908 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state 00:01:35.908 2104eacf0 test/check_so_deps: use VERSION to look for prior tags 00:01:35.908 66289a6db build: use VERSION file for storing version 00:01:35.908 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:01:35.926 [Pipeline] writeFile 00:01:35.940 [Pipeline] sh 00:01:36.223 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:36.235 [Pipeline] sh 00:01:36.516 + cat autorun-spdk.conf 00:01:36.516 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:36.516 SPDK_TEST_NVMF=1 00:01:36.516 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:36.516 SPDK_TEST_URING=1 00:01:36.516 SPDK_TEST_VFIOUSER=1 00:01:36.516 SPDK_TEST_USDT=1 00:01:36.516 SPDK_RUN_ASAN=1 00:01:36.516 SPDK_RUN_UBSAN=1 00:01:36.516 NET_TYPE=virt 00:01:36.516 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:36.524 RUN_NIGHTLY=1 00:01:36.526 [Pipeline] } 00:01:36.540 [Pipeline] // stage 00:01:36.554 [Pipeline] stage 00:01:36.556 [Pipeline] { (Run VM) 00:01:36.568 [Pipeline] sh 00:01:36.850 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:36.850 + echo 'Start stage prepare_nvme.sh' 00:01:36.850 Start stage prepare_nvme.sh 00:01:36.850 + [[ -n 1 ]] 00:01:36.850 + disk_prefix=ex1 00:01:36.850 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:36.850 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:36.850 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:36.850 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:36.850 ++ 
SPDK_TEST_NVMF=1 00:01:36.850 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:36.850 ++ SPDK_TEST_URING=1 00:01:36.850 ++ SPDK_TEST_VFIOUSER=1 00:01:36.850 ++ SPDK_TEST_USDT=1 00:01:36.850 ++ SPDK_RUN_ASAN=1 00:01:36.850 ++ SPDK_RUN_UBSAN=1 00:01:36.850 ++ NET_TYPE=virt 00:01:36.850 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:36.850 ++ RUN_NIGHTLY=1 00:01:36.850 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:36.850 + nvme_files=() 00:01:36.850 + declare -A nvme_files 00:01:36.850 + backend_dir=/var/lib/libvirt/images/backends 00:01:36.850 + nvme_files['nvme.img']=5G 00:01:36.850 + nvme_files['nvme-cmb.img']=5G 00:01:36.850 + nvme_files['nvme-multi0.img']=4G 00:01:36.850 + nvme_files['nvme-multi1.img']=4G 00:01:36.850 + nvme_files['nvme-multi2.img']=4G 00:01:36.850 + nvme_files['nvme-openstack.img']=8G 00:01:36.850 + nvme_files['nvme-zns.img']=5G 00:01:36.850 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:36.850 + (( SPDK_TEST_FTL == 1 )) 00:01:36.850 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:36.850 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:36.850 + for nvme in "${!nvme_files[@]}" 00:01:36.850 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:01:36.850 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:36.850 + for nvme in "${!nvme_files[@]}" 00:01:36.850 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:01:36.850 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:36.850 + for nvme in "${!nvme_files[@]}" 00:01:36.850 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:01:37.109 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:37.109 + for nvme in "${!nvme_files[@]}" 00:01:37.109 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:01:37.109 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:37.109 + for nvme in "${!nvme_files[@]}" 00:01:37.109 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:01:37.109 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:37.109 + for nvme in "${!nvme_files[@]}" 00:01:37.109 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:01:37.367 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:37.367 + for nvme in "${!nvme_files[@]}" 00:01:37.367 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:01:37.367 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:37.367 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:01:37.625 + echo 'End stage prepare_nvme.sh' 00:01:37.625 End stage prepare_nvme.sh 00:01:37.636 [Pipeline] sh 00:01:37.914 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:37.914 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 
--nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:01:37.914 00:01:37.914 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:37.914 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:37.914 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:37.914 HELP=0 00:01:37.914 DRY_RUN=0 00:01:37.914 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:01:37.914 NVME_DISKS_TYPE=nvme,nvme, 00:01:37.914 NVME_AUTO_CREATE=0 00:01:37.914 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:01:37.914 NVME_CMB=,, 00:01:37.914 NVME_PMR=,, 00:01:37.914 NVME_ZNS=,, 00:01:37.914 NVME_MS=,, 00:01:37.914 NVME_FDP=,, 00:01:37.914 SPDK_VAGRANT_DISTRO=fedora39 00:01:37.914 SPDK_VAGRANT_VMCPU=10 00:01:37.914 SPDK_VAGRANT_VMRAM=12288 00:01:37.914 SPDK_VAGRANT_PROVIDER=libvirt 00:01:37.914 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:37.914 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:37.914 SPDK_OPENSTACK_NETWORK=0 00:01:37.914 VAGRANT_PACKAGE_BOX=0 00:01:37.914 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:37.914 FORCE_DISTRO=true 00:01:37.914 VAGRANT_BOX_VERSION= 00:01:37.914 EXTRA_VAGRANTFILES= 00:01:37.914 NIC_MODEL=e1000 00:01:37.914 00:01:37.914 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:37.914 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:41.200 Bringing machine 'default' up with 'libvirt' provider... 00:01:41.457 ==> default: Creating image (snapshot of base box volume). 00:01:41.714 ==> default: Creating domain with the following settings... 
00:01:41.714 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734080615_f71f53300696b6d82db0 00:01:41.714 ==> default: -- Domain type: kvm 00:01:41.715 ==> default: -- Cpus: 10 00:01:41.715 ==> default: -- Feature: acpi 00:01:41.715 ==> default: -- Feature: apic 00:01:41.715 ==> default: -- Feature: pae 00:01:41.715 ==> default: -- Memory: 12288M 00:01:41.715 ==> default: -- Memory Backing: hugepages: 00:01:41.715 ==> default: -- Management MAC: 00:01:41.715 ==> default: -- Loader: 00:01:41.715 ==> default: -- Nvram: 00:01:41.715 ==> default: -- Base box: spdk/fedora39 00:01:41.715 ==> default: -- Storage pool: default 00:01:41.715 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734080615_f71f53300696b6d82db0.img (20G) 00:01:41.715 ==> default: -- Volume Cache: default 00:01:41.715 ==> default: -- Kernel: 00:01:41.715 ==> default: -- Initrd: 00:01:41.715 ==> default: -- Graphics Type: vnc 00:01:41.715 ==> default: -- Graphics Port: -1 00:01:41.715 ==> default: -- Graphics IP: 127.0.0.1 00:01:41.715 ==> default: -- Graphics Password: Not defined 00:01:41.715 ==> default: -- Video Type: cirrus 00:01:41.715 ==> default: -- Video VRAM: 9216 00:01:41.715 ==> default: -- Sound Type: 00:01:41.715 ==> default: -- Keymap: en-us 00:01:41.715 ==> default: -- TPM Path: 00:01:41.715 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:41.715 ==> default: -- Command line args: 00:01:41.715 ==> default: -> value=-device, 00:01:41.715 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:41.715 ==> default: -> value=-drive, 00:01:41.715 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:01:41.715 ==> default: -> value=-device, 00:01:41.715 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:41.715 ==> default: -> value=-device, 00:01:41.715 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:41.715 ==> default: -> value=-drive, 00:01:41.715 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:41.715 ==> default: -> value=-device, 00:01:41.715 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:41.715 ==> default: -> value=-drive, 00:01:41.715 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:41.715 ==> default: -> value=-device, 00:01:41.715 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:41.715 ==> default: -> value=-drive, 00:01:41.715 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:41.715 ==> default: -> value=-device, 00:01:41.715 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:41.715 ==> default: Creating shared folders metadata... 00:01:41.715 ==> default: Starting domain. 00:01:43.093 ==> default: Waiting for domain to get an IP address... 00:02:01.176 ==> default: Waiting for SSH to become available... 00:02:01.176 ==> default: Configuring and enabling network interfaces... 
00:02:03.706 default: SSH address: 192.168.121.29:22 00:02:03.706 default: SSH username: vagrant 00:02:03.706 default: SSH auth method: private key 00:02:05.611 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:13.731 ==> default: Mounting SSHFS shared folder... 00:02:15.108 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:15.108 ==> default: Checking Mount.. 00:02:16.044 ==> default: Folder Successfully Mounted! 00:02:16.044 ==> default: Running provisioner: file... 00:02:16.980 default: ~/.gitconfig => .gitconfig 00:02:17.547 00:02:17.547 SUCCESS! 00:02:17.547 00:02:17.547 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:17.547 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:17.547 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:17.547 00:02:17.555 [Pipeline] } 00:02:17.570 [Pipeline] // stage 00:02:17.581 [Pipeline] dir 00:02:17.582 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:17.584 [Pipeline] { 00:02:17.597 [Pipeline] catchError 00:02:17.599 [Pipeline] { 00:02:17.612 [Pipeline] sh 00:02:17.891 + vagrant ssh-config --host vagrant 00:02:17.891 + sed -ne /^Host/,$p 00:02:17.891 + tee ssh_conf 00:02:21.177 Host vagrant 00:02:21.177 HostName 192.168.121.29 00:02:21.177 User vagrant 00:02:21.177 Port 22 00:02:21.177 UserKnownHostsFile /dev/null 00:02:21.177 StrictHostKeyChecking no 00:02:21.177 PasswordAuthentication no 00:02:21.177 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:21.177 IdentitiesOnly yes 00:02:21.177 LogLevel FATAL 00:02:21.177 ForwardAgent yes 00:02:21.177 ForwardX11 yes 00:02:21.177 00:02:21.192 [Pipeline] withEnv 00:02:21.195 [Pipeline] { 00:02:21.210 [Pipeline] sh 00:02:21.490 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:21.490 source /etc/os-release 00:02:21.490 [[ -e /image.version ]] && img=$(< /image.version) 00:02:21.490 # Minimal, systemd-like check. 00:02:21.490 if [[ -e /.dockerenv ]]; then 00:02:21.490 # Clear garbage from the node's name: 00:02:21.490 # agt-er_autotest_547-896 -> autotest_547-896 00:02:21.490 # $HOSTNAME is the actual container id 00:02:21.490 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:21.490 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:21.490 # We can assume this is a mount from a host where container is running, 00:02:21.490 # so fetch its hostname to easily identify the target swarm worker. 
00:02:21.490 container="$(< /etc/hostname) ($agent)" 00:02:21.490 else 00:02:21.490 # Fallback 00:02:21.490 container=$agent 00:02:21.490 fi 00:02:21.490 fi 00:02:21.490 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:21.490 00:02:21.759 [Pipeline] } 00:02:21.777 [Pipeline] // withEnv 00:02:21.784 [Pipeline] setCustomBuildProperty 00:02:21.798 [Pipeline] stage 00:02:21.800 [Pipeline] { (Tests) 00:02:21.817 [Pipeline] sh 00:02:22.095 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:22.365 [Pipeline] sh 00:02:22.638 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:23.190 [Pipeline] timeout 00:02:23.191 Timeout set to expire in 1 hr 0 min 00:02:23.193 [Pipeline] { 00:02:23.207 [Pipeline] sh 00:02:23.524 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:24.090 HEAD is now at e01cb43b8 mk/spdk.common.mk sed the minor version 00:02:24.101 [Pipeline] sh 00:02:24.380 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:24.651 [Pipeline] sh 00:02:24.930 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:25.203 [Pipeline] sh 00:02:25.483 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:25.742 ++ readlink -f spdk_repo 00:02:25.742 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:25.742 + [[ -n /home/vagrant/spdk_repo ]] 00:02:25.742 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:25.742 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:25.742 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:25.742 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:25.742 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:25.742 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:25.742 + cd /home/vagrant/spdk_repo 00:02:25.742 + source /etc/os-release 00:02:25.742 ++ NAME='Fedora Linux' 00:02:25.742 ++ VERSION='39 (Cloud Edition)' 00:02:25.742 ++ ID=fedora 00:02:25.742 ++ VERSION_ID=39 00:02:25.742 ++ VERSION_CODENAME= 00:02:25.742 ++ PLATFORM_ID=platform:f39 00:02:25.742 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:25.742 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:25.742 ++ LOGO=fedora-logo-icon 00:02:25.742 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:25.742 ++ HOME_URL=https://fedoraproject.org/ 00:02:25.742 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:25.742 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:25.742 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:25.742 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:25.742 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:25.742 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:25.742 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:25.742 ++ SUPPORT_END=2024-11-12 00:02:25.742 ++ VARIANT='Cloud Edition' 00:02:25.742 ++ VARIANT_ID=cloud 00:02:25.742 + uname -a 00:02:25.742 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:25.742 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:26.001 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:26.001 Hugepages 00:02:26.001 node hugesize free / total 00:02:26.001 node0 1048576kB 0 / 0 00:02:26.260 node0 2048kB 0 / 0 00:02:26.260 00:02:26.260 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:26.260 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:26.260 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:26.260 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:26.260 + rm -f /tmp/spdk-ld-path 00:02:26.260 + source autorun-spdk.conf 00:02:26.260 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:26.260 ++ SPDK_TEST_NVMF=1 00:02:26.260 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:26.260 ++ SPDK_TEST_URING=1 00:02:26.260 ++ SPDK_TEST_VFIOUSER=1 00:02:26.260 ++ SPDK_TEST_USDT=1 00:02:26.260 ++ SPDK_RUN_ASAN=1 00:02:26.260 ++ SPDK_RUN_UBSAN=1 00:02:26.260 ++ NET_TYPE=virt 00:02:26.260 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:26.260 ++ RUN_NIGHTLY=1 00:02:26.260 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:26.260 + [[ -n '' ]] 00:02:26.260 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:26.260 + for M in /var/spdk/build-*-manifest.txt 00:02:26.260 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:26.260 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:26.260 + for M in /var/spdk/build-*-manifest.txt 00:02:26.260 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:26.260 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:26.260 + for M in /var/spdk/build-*-manifest.txt 00:02:26.260 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:26.260 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:26.260 ++ uname 00:02:26.260 + [[ Linux == \L\i\n\u\x ]] 00:02:26.260 + sudo dmesg -T 00:02:26.260 + sudo dmesg --clear 00:02:26.260 + dmesg_pid=5248 00:02:26.260 + sudo dmesg -Tw 00:02:26.260 + [[ Fedora Linux == FreeBSD ]] 00:02:26.260 + export 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:26.260 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:26.260 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:26.261 + [[ -x /usr/src/fio-static/fio ]] 00:02:26.261 + export FIO_BIN=/usr/src/fio-static/fio 00:02:26.261 + FIO_BIN=/usr/src/fio-static/fio 00:02:26.261 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:26.261 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:26.261 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:26.261 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:26.261 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:26.261 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:26.261 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:26.261 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:26.261 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:26.520 09:04:20 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:26.520 09:04:20 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:26.520 09:04:20 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:26.520 09:04:20 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:26.520 09:04:20 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:26.520 09:04:20 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:02:26.520 09:04:20 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_VFIOUSER=1 00:02:26.520 09:04:20 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_TEST_USDT=1 00:02:26.520 09:04:20 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_RUN_ASAN=1 00:02:26.520 09:04:20 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_RUN_UBSAN=1 00:02:26.520 09:04:20 -- spdk_repo/autorun-spdk.conf@9 -- $ NET_TYPE=virt 00:02:26.520 09:04:20 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:26.520 09:04:20 -- spdk_repo/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:02:26.520 09:04:20 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:26.520 09:04:20 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:26.520 09:04:20 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:26.520 09:04:20 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:26.520 09:04:20 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:26.520 09:04:20 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:26.520 09:04:20 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:26.520 09:04:20 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:26.520 09:04:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.520 09:04:20 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.520 09:04:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.520 09:04:20 -- paths/export.sh@5 -- $ export PATH 00:02:26.520 09:04:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.520 09:04:20 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:26.520 09:04:20 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:26.520 09:04:20 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734080660.XXXXXX 00:02:26.520 09:04:20 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734080660.8IkcQZ 00:02:26.520 09:04:20 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:26.520 09:04:20 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:26.520 09:04:20 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:26.520 09:04:20 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:26.520 09:04:20 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:26.520 09:04:20 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:26.520 09:04:20 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:26.520 09:04:20 -- common/autotest_common.sh@10 -- $ set +x 00:02:26.520 09:04:20 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:02:26.520 09:04:20 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:26.520 09:04:20 -- pm/common@17 -- $ local monitor 00:02:26.520 09:04:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.520 09:04:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.520 09:04:20 -- pm/common@25 -- $ sleep 1 00:02:26.520 09:04:20 -- pm/common@21 -- $ date +%s 00:02:26.520 09:04:20 -- pm/common@21 -- $ date +%s 00:02:26.520 09:04:20 -- 
pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734080660 00:02:26.520 09:04:20 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734080660 00:02:26.520 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734080660_collect-cpu-load.pm.log 00:02:26.520 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734080660_collect-vmstat.pm.log 00:02:27.457 09:04:21 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:27.457 09:04:21 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:27.457 09:04:21 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:27.457 09:04:21 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:27.457 09:04:21 -- spdk/autobuild.sh@16 -- $ date -u 00:02:27.457 Fri Dec 13 09:04:21 AM UTC 2024 00:02:27.457 09:04:21 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:27.457 v25.01-rc1-2-ge01cb43b8 00:02:27.457 09:04:21 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:27.457 09:04:21 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:27.457 09:04:21 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:27.457 09:04:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:27.457 09:04:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:27.457 ************************************ 00:02:27.457 START TEST asan 00:02:27.457 ************************************ 00:02:27.457 using asan 00:02:27.457 09:04:21 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:27.457 00:02:27.457 real 0m0.000s 00:02:27.457 user 0m0.000s 00:02:27.457 sys 0m0.000s 00:02:27.457 09:04:21 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:27.457 09:04:21 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:27.457 ************************************ 00:02:27.457 END TEST asan 00:02:27.457 ************************************ 00:02:27.457 09:04:21 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:27.457 09:04:21 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:27.457 09:04:21 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:27.457 09:04:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:27.457 09:04:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:27.457 ************************************ 00:02:27.457 START TEST ubsan 00:02:27.457 ************************************ 00:02:27.457 using ubsan 00:02:27.457 09:04:21 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:27.457 00:02:27.457 real 0m0.000s 00:02:27.457 user 0m0.000s 00:02:27.457 sys 0m0.000s 00:02:27.457 09:04:21 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:27.457 ************************************ 00:02:27.457 END TEST ubsan 00:02:27.457 09:04:21 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:27.457 ************************************ 00:02:27.715 09:04:21 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:27.715 09:04:21 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:27.715 09:04:21 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:27.715 09:04:21 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:27.715 09:04:21 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:27.715 09:04:21 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:27.715 09:04:21 -- spdk/autobuild.sh@59 -- $ 
[[ 0 -eq 1 ]] 00:02:27.715 09:04:21 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:27.716 09:04:21 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-shared 00:02:27.974 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:27.974 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:28.233 Using 'verbs' RDMA provider 00:02:41.841 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:56.740 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:56.740 Creating mk/config.mk...done. 00:02:56.740 Creating mk/cc.flags.mk...done. 00:02:56.740 Type 'make' to build. 00:02:56.740 09:04:48 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:56.740 09:04:48 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:56.740 09:04:48 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:56.740 09:04:48 -- common/autotest_common.sh@10 -- $ set +x 00:02:56.740 ************************************ 00:02:56.740 START TEST make 00:02:56.740 ************************************ 00:02:56.740 09:04:48 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:56.998 The Meson build system 00:02:56.998 Version: 1.5.0 00:02:56.998 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:02:56.998 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:56.998 Build type: native build 00:02:56.998 Project name: libvfio-user 00:02:56.998 Project version: 0.0.1 00:02:56.998 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:56.998 C linker for the host machine: cc ld.bfd 2.40-14 00:02:56.998 Host machine cpu family: x86_64 00:02:56.998 Host machine cpu: x86_64 00:02:56.998 Run-time dependency threads found: YES 00:02:56.998 Library dl found: YES 00:02:56.998 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:56.998 Run-time dependency json-c found: YES 0.17 00:02:56.998 Run-time dependency cmocka found: YES 1.1.7 00:02:56.998 Program pytest-3 found: NO 00:02:56.998 Program flake8 found: NO 00:02:56.998 Program misspell-fixer found: NO 00:02:56.998 Program restructuredtext-lint found: NO 00:02:56.998 Program valgrind found: YES (/usr/bin/valgrind) 00:02:56.998 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:56.998 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:56.998 Compiler for C supports arguments -Wwrite-strings: YES 00:02:56.998 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:56.998 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:02:56.998 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:02:56.998 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:56.998 Build targets in project: 8 00:02:56.998 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:56.998 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:56.998 00:02:56.998 libvfio-user 0.0.1 00:02:56.998 00:02:56.998 User defined options 00:02:56.998 buildtype : debug 00:02:56.998 default_library: shared 00:02:56.998 libdir : /usr/local/lib 00:02:56.998 00:02:56.998 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:57.565 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:57.565 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:57.823 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:57.823 [3/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:57.823 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:57.823 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:57.823 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:57.823 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:57.823 [8/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:57.823 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:57.823 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:57.824 [11/37] Compiling C object samples/null.p/null.c.o 00:02:57.824 [12/37] Compiling C object samples/client.p/client.c.o 00:02:57.824 [13/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:57.824 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:57.824 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:57.824 [16/37] Compiling C object samples/server.p/server.c.o 00:02:57.824 [17/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:57.824 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:57.824 [19/37] Linking target samples/client 00:02:58.082 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:58.082 [21/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:58.082 [22/37] Linking target lib/libvfio-user.so.0.0.1 00:02:58.082 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:58.082 [24/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:58.082 [25/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:58.082 [26/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:58.082 [27/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:58.082 [28/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:58.082 [29/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:58.082 [30/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:58.082 [31/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:58.082 [32/37] Linking target samples/server 00:02:58.339 [33/37] Linking target samples/null 00:02:58.340 [34/37] Linking target samples/shadow_ioeventfd_server 00:02:58.340 [35/37] Linking target samples/gpio-pci-idio-16 00:02:58.340 [36/37] Linking target samples/lspci 00:02:58.340 [37/37] Linking target test/unit_tests 00:02:58.340 INFO: autodetecting backend as ninja 00:02:58.340 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:58.340 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:58.906 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:58.906 ninja: no work to do. 00:03:08.876 The Meson build system 00:03:08.876 Version: 1.5.0 00:03:08.876 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:08.876 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:08.876 Build type: native build 00:03:08.876 Program cat found: YES (/usr/bin/cat) 00:03:08.876 Project name: DPDK 00:03:08.876 Project version: 24.03.0 00:03:08.876 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:08.876 C linker for the host machine: cc ld.bfd 2.40-14 00:03:08.876 Host machine cpu family: x86_64 00:03:08.876 Host machine cpu: x86_64 00:03:08.876 Message: ## Building in Developer Mode ## 00:03:08.876 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:08.876 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:08.876 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:08.876 Program python3 found: YES (/usr/bin/python3) 00:03:08.876 Program cat found: YES (/usr/bin/cat) 00:03:08.876 Compiler for C supports arguments -march=native: YES 00:03:08.876 Checking for size of "void *" : 8 00:03:08.876 Checking for size of "void *" : 8 (cached) 00:03:08.876 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:08.876 Library m found: YES 00:03:08.876 Library numa found: YES 00:03:08.876 Has header "numaif.h" : YES 00:03:08.876 Library fdt found: NO 00:03:08.876 Library execinfo found: NO 00:03:08.876 Has header "execinfo.h" : YES 00:03:08.876 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:08.876 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:08.876 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:08.876 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:08.876 Run-time dependency openssl found: YES 3.1.1 00:03:08.876 Run-time dependency libpcap found: YES 1.10.4 00:03:08.876 Has header "pcap.h" with dependency libpcap: YES 00:03:08.876 Compiler for C supports arguments -Wcast-qual: YES 00:03:08.876 Compiler for C supports arguments -Wdeprecated: YES 00:03:08.876 Compiler for C supports arguments -Wformat: YES 00:03:08.876 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:08.876 Compiler for C supports arguments -Wformat-security: NO 00:03:08.876 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:08.876 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:08.876 Compiler for C supports arguments -Wnested-externs: YES 00:03:08.876 Compiler for C supports arguments -Wold-style-definition: YES 00:03:08.876 Compiler for C supports arguments -Wpointer-arith: YES 00:03:08.876 Compiler for C supports arguments -Wsign-compare: YES 00:03:08.876 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:08.876 Compiler for C supports arguments -Wundef: YES 00:03:08.876 Compiler for C supports arguments -Wwrite-strings: YES 00:03:08.876 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:08.876 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:08.876 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:08.876 Compiler for C supports arguments 
-Wno-zero-length-bounds: YES 00:03:08.876 Program objdump found: YES (/usr/bin/objdump) 00:03:08.876 Compiler for C supports arguments -mavx512f: YES 00:03:08.876 Checking if "AVX512 checking" compiles: YES 00:03:08.876 Fetching value of define "__SSE4_2__" : 1 00:03:08.876 Fetching value of define "__AES__" : 1 00:03:08.876 Fetching value of define "__AVX__" : 1 00:03:08.876 Fetching value of define "__AVX2__" : 1 00:03:08.876 Fetching value of define "__AVX512BW__" : (undefined) 00:03:08.876 Fetching value of define "__AVX512CD__" : (undefined) 00:03:08.876 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:08.876 Fetching value of define "__AVX512F__" : (undefined) 00:03:08.876 Fetching value of define "__AVX512VL__" : (undefined) 00:03:08.877 Fetching value of define "__PCLMUL__" : 1 00:03:08.877 Fetching value of define "__RDRND__" : 1 00:03:08.877 Fetching value of define "__RDSEED__" : 1 00:03:08.877 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:08.877 Fetching value of define "__znver1__" : (undefined) 00:03:08.877 Fetching value of define "__znver2__" : (undefined) 00:03:08.877 Fetching value of define "__znver3__" : (undefined) 00:03:08.877 Fetching value of define "__znver4__" : (undefined) 00:03:08.877 Library asan found: YES 00:03:08.877 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:08.877 Message: lib/log: Defining dependency "log" 00:03:08.877 Message: lib/kvargs: Defining dependency "kvargs" 00:03:08.877 Message: lib/telemetry: Defining dependency "telemetry" 00:03:08.877 Library rt found: YES 00:03:08.877 Checking for function "getentropy" : NO 00:03:08.877 Message: lib/eal: Defining dependency "eal" 00:03:08.877 Message: lib/ring: Defining dependency "ring" 00:03:08.877 Message: lib/rcu: Defining dependency "rcu" 00:03:08.877 Message: lib/mempool: Defining dependency "mempool" 00:03:08.877 Message: lib/mbuf: Defining dependency "mbuf" 00:03:08.877 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:08.877 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:08.877 Compiler for C supports arguments -mpclmul: YES 00:03:08.877 Compiler for C supports arguments -maes: YES 00:03:08.877 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:08.877 Compiler for C supports arguments -mavx512bw: YES 00:03:08.877 Compiler for C supports arguments -mavx512dq: YES 00:03:08.877 Compiler for C supports arguments -mavx512vl: YES 00:03:08.877 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:08.877 Compiler for C supports arguments -mavx2: YES 00:03:08.877 Compiler for C supports arguments -mavx: YES 00:03:08.877 Message: lib/net: Defining dependency "net" 00:03:08.877 Message: lib/meter: Defining dependency "meter" 00:03:08.877 Message: lib/ethdev: Defining dependency "ethdev" 00:03:08.877 Message: lib/pci: Defining dependency "pci" 00:03:08.877 Message: lib/cmdline: Defining dependency "cmdline" 00:03:08.877 Message: lib/hash: Defining dependency "hash" 00:03:08.877 Message: lib/timer: Defining dependency "timer" 00:03:08.877 Message: lib/compressdev: Defining dependency "compressdev" 00:03:08.877 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:08.877 Message: lib/dmadev: Defining dependency "dmadev" 00:03:08.877 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:08.877 Message: lib/power: Defining dependency "power" 00:03:08.877 Message: lib/reorder: Defining dependency "reorder" 00:03:08.877 Message: lib/security: Defining dependency "security" 00:03:08.877 Has header 
"linux/userfaultfd.h" : YES 00:03:08.877 Has header "linux/vduse.h" : YES 00:03:08.877 Message: lib/vhost: Defining dependency "vhost" 00:03:08.877 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:08.877 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:08.877 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:08.877 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:08.877 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:08.877 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:08.877 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:08.877 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:08.877 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:08.877 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:08.877 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:08.877 Configuring doxy-api-html.conf using configuration 00:03:08.877 Configuring doxy-api-man.conf using configuration 00:03:08.877 Program mandb found: YES (/usr/bin/mandb) 00:03:08.877 Program sphinx-build found: NO 00:03:08.877 Configuring rte_build_config.h using configuration 00:03:08.877 Message: 00:03:08.877 ================= 00:03:08.877 Applications Enabled 00:03:08.877 ================= 00:03:08.877 00:03:08.877 apps: 00:03:08.877 00:03:08.877 00:03:08.877 Message: 00:03:08.877 ================= 00:03:08.877 Libraries Enabled 00:03:08.877 ================= 00:03:08.877 00:03:08.877 libs: 00:03:08.877 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:08.877 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:08.877 cryptodev, dmadev, power, reorder, security, vhost, 00:03:08.877 00:03:08.877 Message: 00:03:08.877 =============== 00:03:08.877 Drivers Enabled 00:03:08.877 =============== 00:03:08.877 00:03:08.877 common: 00:03:08.877 00:03:08.877 bus: 00:03:08.877 pci, vdev, 00:03:08.877 mempool: 00:03:08.877 ring, 00:03:08.877 dma: 00:03:08.877 00:03:08.877 net: 00:03:08.877 00:03:08.877 crypto: 00:03:08.877 00:03:08.877 compress: 00:03:08.877 00:03:08.877 vdpa: 00:03:08.877 00:03:08.877 00:03:08.877 Message: 00:03:08.877 ================= 00:03:08.877 Content Skipped 00:03:08.877 ================= 00:03:08.877 00:03:08.877 apps: 00:03:08.877 dumpcap: explicitly disabled via build config 00:03:08.877 graph: explicitly disabled via build config 00:03:08.877 pdump: explicitly disabled via build config 00:03:08.877 proc-info: explicitly disabled via build config 00:03:08.877 test-acl: explicitly disabled via build config 00:03:08.877 test-bbdev: explicitly disabled via build config 00:03:08.877 test-cmdline: explicitly disabled via build config 00:03:08.877 test-compress-perf: explicitly disabled via build config 00:03:08.877 test-crypto-perf: explicitly disabled via build config 00:03:08.877 test-dma-perf: explicitly disabled via build config 00:03:08.877 test-eventdev: explicitly disabled via build config 00:03:08.877 test-fib: explicitly disabled via build config 00:03:08.877 test-flow-perf: explicitly disabled via build config 00:03:08.877 test-gpudev: explicitly disabled via build config 00:03:08.877 test-mldev: explicitly disabled via build config 00:03:08.877 test-pipeline: explicitly disabled via build config 00:03:08.877 test-pmd: explicitly disabled via build config 00:03:08.877 test-regex: explicitly disabled via build config 00:03:08.877 
test-sad: explicitly disabled via build config 00:03:08.877 test-security-perf: explicitly disabled via build config 00:03:08.877 00:03:08.877 libs: 00:03:08.877 argparse: explicitly disabled via build config 00:03:08.877 metrics: explicitly disabled via build config 00:03:08.877 acl: explicitly disabled via build config 00:03:08.877 bbdev: explicitly disabled via build config 00:03:08.877 bitratestats: explicitly disabled via build config 00:03:08.877 bpf: explicitly disabled via build config 00:03:08.877 cfgfile: explicitly disabled via build config 00:03:08.877 distributor: explicitly disabled via build config 00:03:08.877 efd: explicitly disabled via build config 00:03:08.877 eventdev: explicitly disabled via build config 00:03:08.877 dispatcher: explicitly disabled via build config 00:03:08.877 gpudev: explicitly disabled via build config 00:03:08.877 gro: explicitly disabled via build config 00:03:08.877 gso: explicitly disabled via build config 00:03:08.877 ip_frag: explicitly disabled via build config 00:03:08.877 jobstats: explicitly disabled via build config 00:03:08.877 latencystats: explicitly disabled via build config 00:03:08.877 lpm: explicitly disabled via build config 00:03:08.877 member: explicitly disabled via build config 00:03:08.877 pcapng: explicitly disabled via build config 00:03:08.877 rawdev: explicitly disabled via build config 00:03:08.877 regexdev: explicitly disabled via build config 00:03:08.877 mldev: explicitly disabled via build config 00:03:08.877 rib: explicitly disabled via build config 00:03:08.877 sched: explicitly disabled via build config 00:03:08.877 stack: explicitly disabled via build config 00:03:08.877 ipsec: explicitly disabled via build config 00:03:08.877 pdcp: explicitly disabled via build config 00:03:08.877 fib: explicitly disabled via build config 00:03:08.877 port: explicitly disabled via build config 00:03:08.877 pdump: explicitly disabled via build config 00:03:08.877 table: explicitly disabled via build config 00:03:08.877 pipeline: explicitly disabled via build config 00:03:08.877 graph: explicitly disabled via build config 00:03:08.877 node: explicitly disabled via build config 00:03:08.877 00:03:08.877 drivers: 00:03:08.877 common/cpt: not in enabled drivers build config 00:03:08.877 common/dpaax: not in enabled drivers build config 00:03:08.877 common/iavf: not in enabled drivers build config 00:03:08.877 common/idpf: not in enabled drivers build config 00:03:08.877 common/ionic: not in enabled drivers build config 00:03:08.877 common/mvep: not in enabled drivers build config 00:03:08.877 common/octeontx: not in enabled drivers build config 00:03:08.877 bus/auxiliary: not in enabled drivers build config 00:03:08.877 bus/cdx: not in enabled drivers build config 00:03:08.877 bus/dpaa: not in enabled drivers build config 00:03:08.877 bus/fslmc: not in enabled drivers build config 00:03:08.877 bus/ifpga: not in enabled drivers build config 00:03:08.877 bus/platform: not in enabled drivers build config 00:03:08.877 bus/uacce: not in enabled drivers build config 00:03:08.877 bus/vmbus: not in enabled drivers build config 00:03:08.877 common/cnxk: not in enabled drivers build config 00:03:08.877 common/mlx5: not in enabled drivers build config 00:03:08.877 common/nfp: not in enabled drivers build config 00:03:08.877 common/nitrox: not in enabled drivers build config 00:03:08.877 common/qat: not in enabled drivers build config 00:03:08.877 common/sfc_efx: not in enabled drivers build config 00:03:08.877 mempool/bucket: not in enabled 
drivers build config 00:03:08.877 mempool/cnxk: not in enabled drivers build config 00:03:08.877 mempool/dpaa: not in enabled drivers build config 00:03:08.877 mempool/dpaa2: not in enabled drivers build config 00:03:08.877 mempool/octeontx: not in enabled drivers build config 00:03:08.877 mempool/stack: not in enabled drivers build config 00:03:08.877 dma/cnxk: not in enabled drivers build config 00:03:08.877 dma/dpaa: not in enabled drivers build config 00:03:08.877 dma/dpaa2: not in enabled drivers build config 00:03:08.877 dma/hisilicon: not in enabled drivers build config 00:03:08.877 dma/idxd: not in enabled drivers build config 00:03:08.877 dma/ioat: not in enabled drivers build config 00:03:08.877 dma/skeleton: not in enabled drivers build config 00:03:08.877 net/af_packet: not in enabled drivers build config 00:03:08.877 net/af_xdp: not in enabled drivers build config 00:03:08.877 net/ark: not in enabled drivers build config 00:03:08.877 net/atlantic: not in enabled drivers build config 00:03:08.877 net/avp: not in enabled drivers build config 00:03:08.877 net/axgbe: not in enabled drivers build config 00:03:08.878 net/bnx2x: not in enabled drivers build config 00:03:08.878 net/bnxt: not in enabled drivers build config 00:03:08.878 net/bonding: not in enabled drivers build config 00:03:08.878 net/cnxk: not in enabled drivers build config 00:03:08.878 net/cpfl: not in enabled drivers build config 00:03:08.878 net/cxgbe: not in enabled drivers build config 00:03:08.878 net/dpaa: not in enabled drivers build config 00:03:08.878 net/dpaa2: not in enabled drivers build config 00:03:08.878 net/e1000: not in enabled drivers build config 00:03:08.878 net/ena: not in enabled drivers build config 00:03:08.878 net/enetc: not in enabled drivers build config 00:03:08.878 net/enetfec: not in enabled drivers build config 00:03:08.878 net/enic: not in enabled drivers build config 00:03:08.878 net/failsafe: not in enabled drivers build config 00:03:08.878 net/fm10k: not in enabled drivers build config 00:03:08.878 net/gve: not in enabled drivers build config 00:03:08.878 net/hinic: not in enabled drivers build config 00:03:08.878 net/hns3: not in enabled drivers build config 00:03:08.878 net/i40e: not in enabled drivers build config 00:03:08.878 net/iavf: not in enabled drivers build config 00:03:08.878 net/ice: not in enabled drivers build config 00:03:08.878 net/idpf: not in enabled drivers build config 00:03:08.878 net/igc: not in enabled drivers build config 00:03:08.878 net/ionic: not in enabled drivers build config 00:03:08.878 net/ipn3ke: not in enabled drivers build config 00:03:08.878 net/ixgbe: not in enabled drivers build config 00:03:08.878 net/mana: not in enabled drivers build config 00:03:08.878 net/memif: not in enabled drivers build config 00:03:08.878 net/mlx4: not in enabled drivers build config 00:03:08.878 net/mlx5: not in enabled drivers build config 00:03:08.878 net/mvneta: not in enabled drivers build config 00:03:08.878 net/mvpp2: not in enabled drivers build config 00:03:08.878 net/netvsc: not in enabled drivers build config 00:03:08.878 net/nfb: not in enabled drivers build config 00:03:08.878 net/nfp: not in enabled drivers build config 00:03:08.878 net/ngbe: not in enabled drivers build config 00:03:08.878 net/null: not in enabled drivers build config 00:03:08.878 net/octeontx: not in enabled drivers build config 00:03:08.878 net/octeon_ep: not in enabled drivers build config 00:03:08.878 net/pcap: not in enabled drivers build config 00:03:08.878 net/pfe: not in 
enabled drivers build config 00:03:08.878 net/qede: not in enabled drivers build config 00:03:08.878 net/ring: not in enabled drivers build config 00:03:08.878 net/sfc: not in enabled drivers build config 00:03:08.878 net/softnic: not in enabled drivers build config 00:03:08.878 net/tap: not in enabled drivers build config 00:03:08.878 net/thunderx: not in enabled drivers build config 00:03:08.878 net/txgbe: not in enabled drivers build config 00:03:08.878 net/vdev_netvsc: not in enabled drivers build config 00:03:08.878 net/vhost: not in enabled drivers build config 00:03:08.878 net/virtio: not in enabled drivers build config 00:03:08.878 net/vmxnet3: not in enabled drivers build config 00:03:08.878 raw/*: missing internal dependency, "rawdev" 00:03:08.878 crypto/armv8: not in enabled drivers build config 00:03:08.878 crypto/bcmfs: not in enabled drivers build config 00:03:08.878 crypto/caam_jr: not in enabled drivers build config 00:03:08.878 crypto/ccp: not in enabled drivers build config 00:03:08.878 crypto/cnxk: not in enabled drivers build config 00:03:08.878 crypto/dpaa_sec: not in enabled drivers build config 00:03:08.878 crypto/dpaa2_sec: not in enabled drivers build config 00:03:08.878 crypto/ipsec_mb: not in enabled drivers build config 00:03:08.878 crypto/mlx5: not in enabled drivers build config 00:03:08.878 crypto/mvsam: not in enabled drivers build config 00:03:08.878 crypto/nitrox: not in enabled drivers build config 00:03:08.878 crypto/null: not in enabled drivers build config 00:03:08.878 crypto/octeontx: not in enabled drivers build config 00:03:08.878 crypto/openssl: not in enabled drivers build config 00:03:08.878 crypto/scheduler: not in enabled drivers build config 00:03:08.878 crypto/uadk: not in enabled drivers build config 00:03:08.878 crypto/virtio: not in enabled drivers build config 00:03:08.878 compress/isal: not in enabled drivers build config 00:03:08.878 compress/mlx5: not in enabled drivers build config 00:03:08.878 compress/nitrox: not in enabled drivers build config 00:03:08.878 compress/octeontx: not in enabled drivers build config 00:03:08.878 compress/zlib: not in enabled drivers build config 00:03:08.878 regex/*: missing internal dependency, "regexdev" 00:03:08.878 ml/*: missing internal dependency, "mldev" 00:03:08.878 vdpa/ifc: not in enabled drivers build config 00:03:08.878 vdpa/mlx5: not in enabled drivers build config 00:03:08.878 vdpa/nfp: not in enabled drivers build config 00:03:08.878 vdpa/sfc: not in enabled drivers build config 00:03:08.878 event/*: missing internal dependency, "eventdev" 00:03:08.878 baseband/*: missing internal dependency, "bbdev" 00:03:08.878 gpu/*: missing internal dependency, "gpudev" 00:03:08.878 00:03:08.878 00:03:08.878 Build targets in project: 85 00:03:08.878 00:03:08.878 DPDK 24.03.0 00:03:08.878 00:03:08.878 User defined options 00:03:08.878 buildtype : debug 00:03:08.878 default_library : shared 00:03:08.878 libdir : lib 00:03:08.878 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:08.878 b_sanitize : address 00:03:08.878 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:08.878 c_link_args : 00:03:08.878 cpu_instruction_set: native 00:03:08.878 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:08.878 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:08.878 enable_docs : false 00:03:08.878 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:08.878 enable_kmods : false 00:03:08.878 max_lcores : 128 00:03:08.878 tests : false 00:03:08.878 00:03:08.878 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:09.136 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:09.394 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:09.394 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:09.394 [3/268] Linking static target lib/librte_kvargs.a 00:03:09.394 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:09.394 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:09.394 [6/268] Linking static target lib/librte_log.a 00:03:09.960 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.960 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:09.960 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:09.960 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:10.218 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:10.218 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:10.218 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:10.218 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:10.218 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:10.218 [16/268] Linking static target lib/librte_telemetry.a 00:03:10.476 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.476 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:10.476 [19/268] Linking target lib/librte_log.so.24.1 00:03:10.476 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:10.734 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:10.992 [22/268] Linking target lib/librte_kvargs.so.24.1 00:03:10.992 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:10.992 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:11.258 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:11.258 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:11.258 [27/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:11.258 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:11.258 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.258 [30/268] Linking target lib/librte_telemetry.so.24.1 00:03:11.258 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:11.258 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:11.520 [33/268] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:11.520 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:11.520 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:11.520 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:11.777 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:12.035 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:12.035 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:12.035 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:12.293 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:12.293 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:12.293 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:12.293 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:12.293 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:12.550 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:12.807 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:12.807 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:12.807 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:12.807 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:13.065 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:13.065 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:13.323 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:13.323 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:13.323 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:13.581 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:13.581 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:13.581 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:13.839 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:13.839 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:13.839 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:13.839 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:14.098 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:14.098 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:14.098 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:14.356 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:14.356 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:14.356 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:14.614 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:14.614 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:14.614 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:14.872 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 
00:03:14.872 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:14.872 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:14.872 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:14.872 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:14.872 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:15.129 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:15.129 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:15.129 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:15.387 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:15.387 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:15.387 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:15.646 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:15.646 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:15.646 [86/268] Linking static target lib/librte_eal.a 00:03:15.904 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:15.904 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:15.904 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:15.904 [90/268] Linking static target lib/librte_rcu.a 00:03:15.904 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:16.162 [92/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:16.162 [93/268] Linking static target lib/librte_ring.a 00:03:16.162 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:16.162 [95/268] Linking static target lib/librte_mempool.a 00:03:16.162 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:16.420 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:16.420 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:16.420 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.420 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.420 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:16.986 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:16.986 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:16.986 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:16.986 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:17.244 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:17.244 [107/268] Linking static target lib/librte_mbuf.a 00:03:17.244 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:17.244 [109/268] Linking static target lib/librte_meter.a 00:03:17.244 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:17.244 [111/268] Linking static target lib/librte_net.a 00:03:17.502 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:17.502 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.502 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:17.502 [115/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:17.760 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.760 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.018 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:18.018 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:18.276 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.276 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:18.862 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:18.862 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:18.862 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:19.120 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:19.120 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:19.120 [127/268] Linking static target lib/librte_pci.a 00:03:19.120 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:19.120 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:19.379 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:19.379 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:19.379 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:19.379 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:19.379 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.379 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:19.638 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:19.638 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:19.638 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:19.638 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:19.638 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:19.638 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:19.638 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:19.638 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:19.638 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:19.897 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:19.897 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:20.155 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:20.155 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:20.414 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:20.414 [150/268] Linking static target lib/librte_cmdline.a 00:03:20.414 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:20.414 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:20.414 [153/268] Linking static target lib/librte_timer.a 00:03:20.673 [154/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:20.932 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:20.932 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:20.932 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:20.932 [158/268] Linking static target lib/librte_ethdev.a 00:03:21.191 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:21.191 [160/268] Linking static target lib/librte_compressdev.a 00:03:21.191 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.191 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:21.191 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:21.449 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:21.449 [165/268] Linking static target lib/librte_hash.a 00:03:21.449 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:21.707 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:21.707 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:21.707 [169/268] Linking static target lib/librte_dmadev.a 00:03:21.966 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:21.966 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:21.966 [172/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.966 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.224 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:22.483 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:22.742 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:22.742 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:22.742 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.742 [179/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.742 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:22.742 [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:22.742 [182/268] Linking static target lib/librte_cryptodev.a 00:03:22.742 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:22.742 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:23.310 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:23.310 [186/268] Linking static target lib/librte_power.a 00:03:23.568 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:23.568 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:23.827 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:23.827 [190/268] Linking static target lib/librte_security.a 00:03:23.827 [191/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:23.827 [192/268] Linking static target lib/librte_reorder.a 00:03:24.086 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:24.086 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 
00:03:24.344 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.602 [196/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.602 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.602 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:24.860 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:25.119 [200/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.119 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:25.377 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:25.377 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:25.377 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:25.377 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:25.635 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:25.893 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:25.893 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:25.893 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:26.151 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:26.151 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:26.409 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:26.409 [213/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:26.409 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:26.409 [215/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:26.409 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:26.409 [217/268] Linking static target drivers/librte_bus_vdev.a 00:03:26.409 [218/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:26.409 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:26.410 [220/268] Linking static target drivers/librte_bus_pci.a 00:03:26.410 [221/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:26.668 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:26.668 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:26.668 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:26.668 [225/268] Linking static target drivers/librte_mempool_ring.a 00:03:26.668 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.926 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.492 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:27.492 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.750 [230/268] Linking target lib/librte_eal.so.24.1 00:03:27.750 [231/268] Generating symbol file 
lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:28.008 [232/268] Linking target lib/librte_timer.so.24.1 00:03:28.008 [233/268] Linking target lib/librte_pci.so.24.1 00:03:28.008 [234/268] Linking target lib/librte_dmadev.so.24.1 00:03:28.008 [235/268] Linking target lib/librte_meter.so.24.1 00:03:28.008 [236/268] Linking target lib/librte_ring.so.24.1 00:03:28.008 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:28.008 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:28.008 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:28.008 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:28.008 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:28.008 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:28.008 [243/268] Linking target lib/librte_mempool.so.24.1 00:03:28.008 [244/268] Linking target lib/librte_rcu.so.24.1 00:03:28.008 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:28.266 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:28.266 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:28.266 [248/268] Linking target lib/librte_mbuf.so.24.1 00:03:28.266 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:28.524 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:28.524 [251/268] Linking target lib/librte_reorder.so.24.1 00:03:28.524 [252/268] Linking target lib/librte_net.so.24.1 00:03:28.524 [253/268] Linking target lib/librte_compressdev.so.24.1 00:03:28.524 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:03:28.524 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:28.524 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:28.782 [257/268] Linking target lib/librte_hash.so.24.1 00:03:28.782 [258/268] Linking target lib/librte_cmdline.so.24.1 00:03:28.782 [259/268] Linking target lib/librte_security.so.24.1 00:03:28.782 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:29.040 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.299 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:29.299 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:29.299 [264/268] Linking target lib/librte_power.so.24.1 00:03:32.587 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:32.587 [266/268] Linking static target lib/librte_vhost.a 00:03:33.963 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.963 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:33.963 INFO: autodetecting backend as ninja 00:03:33.963 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:52.048 CC lib/ut/ut.o 00:03:52.048 CC lib/ut_mock/mock.o 00:03:52.048 CC lib/log/log.o 00:03:52.048 CC lib/log/log_flags.o 00:03:52.048 CC lib/log/log_deprecated.o 00:03:52.048 LIB libspdk_ut.a 00:03:52.048 LIB libspdk_ut_mock.a 00:03:52.048 LIB libspdk_log.a 00:03:52.048 SO libspdk_ut.so.2.0 00:03:52.048 SO libspdk_ut_mock.so.6.0 00:03:52.048 SO 
libspdk_log.so.7.1 00:03:52.048 SYMLINK libspdk_ut.so 00:03:52.048 SYMLINK libspdk_ut_mock.so 00:03:52.048 SYMLINK libspdk_log.so 00:03:52.306 CC lib/dma/dma.o 00:03:52.306 CXX lib/trace_parser/trace.o 00:03:52.306 CC lib/ioat/ioat.o 00:03:52.306 CC lib/util/base64.o 00:03:52.306 CC lib/util/cpuset.o 00:03:52.306 CC lib/util/bit_array.o 00:03:52.306 CC lib/util/crc32.o 00:03:52.306 CC lib/util/crc32c.o 00:03:52.306 CC lib/util/crc16.o 00:03:52.306 CC lib/vfio_user/host/vfio_user_pci.o 00:03:52.306 CC lib/util/crc32_ieee.o 00:03:52.306 CC lib/util/crc64.o 00:03:52.306 CC lib/util/dif.o 00:03:52.564 CC lib/util/fd.o 00:03:52.564 LIB libspdk_dma.a 00:03:52.564 CC lib/vfio_user/host/vfio_user.o 00:03:52.564 SO libspdk_dma.so.5.0 00:03:52.564 CC lib/util/fd_group.o 00:03:52.564 CC lib/util/file.o 00:03:52.564 CC lib/util/hexlify.o 00:03:52.564 LIB libspdk_ioat.a 00:03:52.564 CC lib/util/iov.o 00:03:52.564 SYMLINK libspdk_dma.so 00:03:52.565 SO libspdk_ioat.so.7.0 00:03:52.565 CC lib/util/math.o 00:03:52.823 SYMLINK libspdk_ioat.so 00:03:52.823 CC lib/util/net.o 00:03:52.823 CC lib/util/pipe.o 00:03:52.823 LIB libspdk_vfio_user.a 00:03:52.823 CC lib/util/strerror_tls.o 00:03:52.823 SO libspdk_vfio_user.so.5.0 00:03:52.823 CC lib/util/string.o 00:03:52.823 SYMLINK libspdk_vfio_user.so 00:03:52.823 CC lib/util/uuid.o 00:03:52.823 CC lib/util/xor.o 00:03:52.823 CC lib/util/zipf.o 00:03:52.823 CC lib/util/md5.o 00:03:53.390 LIB libspdk_util.a 00:03:53.390 SO libspdk_util.so.10.1 00:03:53.390 LIB libspdk_trace_parser.a 00:03:53.390 SO libspdk_trace_parser.so.6.0 00:03:53.390 SYMLINK libspdk_util.so 00:03:53.649 SYMLINK libspdk_trace_parser.so 00:03:53.649 CC lib/rdma_utils/rdma_utils.o 00:03:53.649 CC lib/conf/conf.o 00:03:53.649 CC lib/env_dpdk/env.o 00:03:53.649 CC lib/env_dpdk/memory.o 00:03:53.649 CC lib/idxd/idxd.o 00:03:53.649 CC lib/env_dpdk/pci.o 00:03:53.649 CC lib/idxd/idxd_user.o 00:03:53.649 CC lib/env_dpdk/init.o 00:03:53.649 CC lib/vmd/vmd.o 00:03:53.649 CC lib/json/json_parse.o 00:03:53.908 LIB libspdk_conf.a 00:03:53.908 SO libspdk_conf.so.6.0 00:03:53.908 CC lib/idxd/idxd_kernel.o 00:03:53.908 CC lib/json/json_util.o 00:03:53.908 LIB libspdk_rdma_utils.a 00:03:54.166 SYMLINK libspdk_conf.so 00:03:54.166 CC lib/json/json_write.o 00:03:54.166 SO libspdk_rdma_utils.so.1.0 00:03:54.166 SYMLINK libspdk_rdma_utils.so 00:03:54.166 CC lib/env_dpdk/threads.o 00:03:54.166 CC lib/env_dpdk/pci_ioat.o 00:03:54.166 CC lib/env_dpdk/pci_virtio.o 00:03:54.166 CC lib/env_dpdk/pci_vmd.o 00:03:54.166 CC lib/env_dpdk/pci_idxd.o 00:03:54.425 CC lib/env_dpdk/pci_event.o 00:03:54.425 CC lib/vmd/led.o 00:03:54.425 CC lib/env_dpdk/sigbus_handler.o 00:03:54.425 LIB libspdk_json.a 00:03:54.425 CC lib/env_dpdk/pci_dpdk.o 00:03:54.425 SO libspdk_json.so.6.0 00:03:54.425 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:54.425 CC lib/rdma_provider/common.o 00:03:54.425 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:54.425 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:54.425 LIB libspdk_idxd.a 00:03:54.425 SYMLINK libspdk_json.so 00:03:54.425 SO libspdk_idxd.so.12.1 00:03:54.425 LIB libspdk_vmd.a 00:03:54.684 SO libspdk_vmd.so.6.0 00:03:54.684 SYMLINK libspdk_idxd.so 00:03:54.684 SYMLINK libspdk_vmd.so 00:03:54.684 CC lib/jsonrpc/jsonrpc_server.o 00:03:54.684 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:54.684 CC lib/jsonrpc/jsonrpc_client.o 00:03:54.684 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:54.684 LIB libspdk_rdma_provider.a 00:03:54.684 SO libspdk_rdma_provider.so.7.0 00:03:54.970 SYMLINK libspdk_rdma_provider.so 00:03:54.970 LIB 
libspdk_jsonrpc.a 00:03:54.970 SO libspdk_jsonrpc.so.6.0 00:03:55.256 SYMLINK libspdk_jsonrpc.so 00:03:55.515 CC lib/rpc/rpc.o 00:03:55.515 LIB libspdk_env_dpdk.a 00:03:55.515 SO libspdk_env_dpdk.so.15.1 00:03:55.774 LIB libspdk_rpc.a 00:03:55.774 SO libspdk_rpc.so.6.0 00:03:55.774 SYMLINK libspdk_rpc.so 00:03:55.774 SYMLINK libspdk_env_dpdk.so 00:03:56.033 CC lib/keyring/keyring.o 00:03:56.033 CC lib/keyring/keyring_rpc.o 00:03:56.033 CC lib/trace/trace.o 00:03:56.033 CC lib/trace/trace_rpc.o 00:03:56.033 CC lib/trace/trace_flags.o 00:03:56.033 CC lib/notify/notify.o 00:03:56.033 CC lib/notify/notify_rpc.o 00:03:56.293 LIB libspdk_notify.a 00:03:56.293 LIB libspdk_keyring.a 00:03:56.293 SO libspdk_notify.so.6.0 00:03:56.293 SO libspdk_keyring.so.2.0 00:03:56.293 SYMLINK libspdk_notify.so 00:03:56.293 SYMLINK libspdk_keyring.so 00:03:56.293 LIB libspdk_trace.a 00:03:56.293 SO libspdk_trace.so.11.0 00:03:56.552 SYMLINK libspdk_trace.so 00:03:56.811 CC lib/sock/sock.o 00:03:56.811 CC lib/sock/sock_rpc.o 00:03:56.811 CC lib/thread/thread.o 00:03:56.811 CC lib/thread/iobuf.o 00:03:57.380 LIB libspdk_sock.a 00:03:57.380 SO libspdk_sock.so.10.0 00:03:57.380 SYMLINK libspdk_sock.so 00:03:57.947 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:57.947 CC lib/nvme/nvme_ctrlr.o 00:03:57.947 CC lib/nvme/nvme_fabric.o 00:03:57.947 CC lib/nvme/nvme_ns.o 00:03:57.947 CC lib/nvme/nvme_ns_cmd.o 00:03:57.947 CC lib/nvme/nvme_pcie_common.o 00:03:57.947 CC lib/nvme/nvme_pcie.o 00:03:57.947 CC lib/nvme/nvme.o 00:03:57.947 CC lib/nvme/nvme_qpair.o 00:03:58.514 CC lib/nvme/nvme_quirks.o 00:03:58.772 CC lib/nvme/nvme_transport.o 00:03:58.772 CC lib/nvme/nvme_discovery.o 00:03:58.772 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:58.772 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:58.772 LIB libspdk_thread.a 00:03:59.030 SO libspdk_thread.so.11.0 00:03:59.030 CC lib/nvme/nvme_tcp.o 00:03:59.030 CC lib/nvme/nvme_opal.o 00:03:59.030 SYMLINK libspdk_thread.so 00:03:59.030 CC lib/nvme/nvme_io_msg.o 00:03:59.289 CC lib/nvme/nvme_poll_group.o 00:03:59.289 CC lib/nvme/nvme_zns.o 00:03:59.289 CC lib/nvme/nvme_stubs.o 00:03:59.289 CC lib/nvme/nvme_auth.o 00:03:59.547 CC lib/nvme/nvme_cuse.o 00:03:59.547 CC lib/nvme/nvme_vfio_user.o 00:03:59.806 CC lib/nvme/nvme_rdma.o 00:03:59.806 CC lib/accel/accel.o 00:03:59.806 CC lib/accel/accel_rpc.o 00:03:59.806 CC lib/accel/accel_sw.o 00:04:00.065 CC lib/blob/blobstore.o 00:04:00.323 CC lib/init/json_config.o 00:04:00.582 CC lib/virtio/virtio.o 00:04:00.582 CC lib/vfu_tgt/tgt_endpoint.o 00:04:00.582 CC lib/vfu_tgt/tgt_rpc.o 00:04:00.582 CC lib/init/subsystem.o 00:04:00.582 CC lib/blob/request.o 00:04:00.582 CC lib/init/subsystem_rpc.o 00:04:00.840 CC lib/fsdev/fsdev.o 00:04:00.840 CC lib/fsdev/fsdev_io.o 00:04:00.840 CC lib/blob/zeroes.o 00:04:00.840 CC lib/init/rpc.o 00:04:00.840 CC lib/virtio/virtio_vhost_user.o 00:04:00.840 LIB libspdk_vfu_tgt.a 00:04:00.840 SO libspdk_vfu_tgt.so.3.0 00:04:01.099 CC lib/blob/blob_bs_dev.o 00:04:01.099 SYMLINK libspdk_vfu_tgt.so 00:04:01.099 CC lib/fsdev/fsdev_rpc.o 00:04:01.099 LIB libspdk_init.a 00:04:01.099 CC lib/virtio/virtio_vfio_user.o 00:04:01.099 SO libspdk_init.so.6.0 00:04:01.357 SYMLINK libspdk_init.so 00:04:01.357 CC lib/virtio/virtio_pci.o 00:04:01.357 LIB libspdk_accel.a 00:04:01.357 SO libspdk_accel.so.16.0 00:04:01.615 CC lib/event/app.o 00:04:01.615 CC lib/event/log_rpc.o 00:04:01.615 CC lib/event/app_rpc.o 00:04:01.615 CC lib/event/reactor.o 00:04:01.615 CC lib/event/scheduler_static.o 00:04:01.615 LIB libspdk_fsdev.a 00:04:01.615 SYMLINK libspdk_accel.so 
00:04:01.615 LIB libspdk_nvme.a 00:04:01.615 SO libspdk_fsdev.so.2.0 00:04:01.615 LIB libspdk_virtio.a 00:04:01.615 SYMLINK libspdk_fsdev.so 00:04:01.615 SO libspdk_virtio.so.7.0 00:04:01.615 CC lib/bdev/bdev.o 00:04:01.615 CC lib/bdev/bdev_rpc.o 00:04:01.615 CC lib/bdev/bdev_zone.o 00:04:01.873 SYMLINK libspdk_virtio.so 00:04:01.873 CC lib/bdev/part.o 00:04:01.873 SO libspdk_nvme.so.15.0 00:04:01.873 CC lib/bdev/scsi_nvme.o 00:04:01.873 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:02.132 SYMLINK libspdk_nvme.so 00:04:02.132 LIB libspdk_event.a 00:04:02.132 SO libspdk_event.so.14.0 00:04:02.390 SYMLINK libspdk_event.so 00:04:02.648 LIB libspdk_fuse_dispatcher.a 00:04:02.648 SO libspdk_fuse_dispatcher.so.1.0 00:04:02.906 SYMLINK libspdk_fuse_dispatcher.so 00:04:04.281 LIB libspdk_blob.a 00:04:04.281 SO libspdk_blob.so.12.0 00:04:04.539 SYMLINK libspdk_blob.so 00:04:04.798 CC lib/blobfs/blobfs.o 00:04:04.798 CC lib/blobfs/tree.o 00:04:04.798 CC lib/lvol/lvol.o 00:04:05.057 LIB libspdk_bdev.a 00:04:05.315 SO libspdk_bdev.so.17.0 00:04:05.316 SYMLINK libspdk_bdev.so 00:04:05.574 CC lib/nbd/nbd.o 00:04:05.574 CC lib/nbd/nbd_rpc.o 00:04:05.574 CC lib/ublk/ublk.o 00:04:05.574 CC lib/ublk/ublk_rpc.o 00:04:05.574 CC lib/ftl/ftl_core.o 00:04:05.574 CC lib/ftl/ftl_init.o 00:04:05.574 CC lib/scsi/dev.o 00:04:05.574 CC lib/nvmf/ctrlr.o 00:04:05.832 CC lib/ftl/ftl_layout.o 00:04:05.832 CC lib/ftl/ftl_debug.o 00:04:05.832 CC lib/ftl/ftl_io.o 00:04:05.832 CC lib/scsi/lun.o 00:04:05.832 LIB libspdk_blobfs.a 00:04:05.832 LIB libspdk_lvol.a 00:04:06.091 SO libspdk_blobfs.so.11.0 00:04:06.091 SO libspdk_lvol.so.11.0 00:04:06.091 CC lib/scsi/port.o 00:04:06.091 SYMLINK libspdk_lvol.so 00:04:06.091 SYMLINK libspdk_blobfs.so 00:04:06.091 CC lib/nvmf/ctrlr_discovery.o 00:04:06.091 CC lib/nvmf/ctrlr_bdev.o 00:04:06.091 CC lib/scsi/scsi.o 00:04:06.091 LIB libspdk_nbd.a 00:04:06.091 CC lib/ftl/ftl_sb.o 00:04:06.091 CC lib/ftl/ftl_l2p.o 00:04:06.091 SO libspdk_nbd.so.7.0 00:04:06.349 CC lib/nvmf/subsystem.o 00:04:06.349 SYMLINK libspdk_nbd.so 00:04:06.349 CC lib/nvmf/nvmf.o 00:04:06.349 CC lib/ftl/ftl_l2p_flat.o 00:04:06.349 CC lib/scsi/scsi_bdev.o 00:04:06.349 CC lib/ftl/ftl_nv_cache.o 00:04:06.349 CC lib/nvmf/nvmf_rpc.o 00:04:06.349 LIB libspdk_ublk.a 00:04:06.607 SO libspdk_ublk.so.3.0 00:04:06.607 CC lib/nvmf/transport.o 00:04:06.607 SYMLINK libspdk_ublk.so 00:04:06.607 CC lib/nvmf/tcp.o 00:04:06.607 CC lib/nvmf/stubs.o 00:04:06.866 CC lib/scsi/scsi_pr.o 00:04:06.866 CC lib/nvmf/mdns_server.o 00:04:07.123 CC lib/nvmf/vfio_user.o 00:04:07.381 CC lib/scsi/scsi_rpc.o 00:04:07.381 CC lib/nvmf/rdma.o 00:04:07.381 CC lib/nvmf/auth.o 00:04:07.381 CC lib/scsi/task.o 00:04:07.381 CC lib/ftl/ftl_band.o 00:04:07.381 CC lib/ftl/ftl_band_ops.o 00:04:07.640 CC lib/ftl/ftl_writer.o 00:04:07.640 LIB libspdk_scsi.a 00:04:07.640 SO libspdk_scsi.so.9.0 00:04:07.898 CC lib/ftl/ftl_rq.o 00:04:07.898 SYMLINK libspdk_scsi.so 00:04:07.898 CC lib/ftl/ftl_reloc.o 00:04:07.898 CC lib/ftl/ftl_l2p_cache.o 00:04:07.898 CC lib/ftl/ftl_p2l.o 00:04:08.156 CC lib/ftl/ftl_p2l_log.o 00:04:08.156 CC lib/iscsi/conn.o 00:04:08.156 CC lib/vhost/vhost.o 00:04:08.415 CC lib/ftl/mngt/ftl_mngt.o 00:04:08.415 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:08.415 CC lib/iscsi/init_grp.o 00:04:08.415 CC lib/iscsi/iscsi.o 00:04:08.673 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:08.673 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:08.673 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:08.673 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:08.673 CC lib/iscsi/param.o 00:04:08.932 CC 
lib/iscsi/portal_grp.o 00:04:08.932 CC lib/iscsi/tgt_node.o 00:04:08.932 CC lib/iscsi/iscsi_subsystem.o 00:04:09.190 CC lib/iscsi/iscsi_rpc.o 00:04:09.190 CC lib/vhost/vhost_rpc.o 00:04:09.190 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:09.190 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:09.190 CC lib/vhost/vhost_scsi.o 00:04:09.190 CC lib/vhost/vhost_blk.o 00:04:09.447 CC lib/iscsi/task.o 00:04:09.447 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:09.447 CC lib/vhost/rte_vhost_user.o 00:04:09.447 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:09.706 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:09.706 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:09.706 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:09.706 CC lib/ftl/utils/ftl_conf.o 00:04:09.706 CC lib/ftl/utils/ftl_md.o 00:04:09.964 CC lib/ftl/utils/ftl_mempool.o 00:04:09.964 CC lib/ftl/utils/ftl_bitmap.o 00:04:09.964 CC lib/ftl/utils/ftl_property.o 00:04:09.964 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:10.223 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:10.223 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:10.223 LIB libspdk_nvmf.a 00:04:10.223 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:10.223 LIB libspdk_iscsi.a 00:04:10.481 SO libspdk_nvmf.so.20.0 00:04:10.481 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:10.481 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:10.481 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:10.481 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:10.481 SO libspdk_iscsi.so.8.0 00:04:10.481 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:10.481 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:10.481 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:10.481 SYMLINK libspdk_iscsi.so 00:04:10.481 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:10.740 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:10.740 CC lib/ftl/base/ftl_base_dev.o 00:04:10.740 CC lib/ftl/base/ftl_base_bdev.o 00:04:10.740 CC lib/ftl/ftl_trace.o 00:04:10.740 SYMLINK libspdk_nvmf.so 00:04:10.740 LIB libspdk_vhost.a 00:04:10.740 SO libspdk_vhost.so.8.0 00:04:10.999 SYMLINK libspdk_vhost.so 00:04:10.999 LIB libspdk_ftl.a 00:04:11.258 SO libspdk_ftl.so.9.0 00:04:11.515 SYMLINK libspdk_ftl.so 00:04:11.773 CC module/vfu_device/vfu_virtio.o 00:04:11.773 CC module/env_dpdk/env_dpdk_rpc.o 00:04:11.773 CC module/blob/bdev/blob_bdev.o 00:04:11.773 CC module/keyring/file/keyring.o 00:04:11.773 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:11.773 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:11.773 CC module/sock/posix/posix.o 00:04:11.773 CC module/scheduler/gscheduler/gscheduler.o 00:04:11.773 CC module/accel/error/accel_error.o 00:04:11.773 CC module/fsdev/aio/fsdev_aio.o 00:04:12.032 LIB libspdk_env_dpdk_rpc.a 00:04:12.032 SO libspdk_env_dpdk_rpc.so.6.0 00:04:12.032 SYMLINK libspdk_env_dpdk_rpc.so 00:04:12.032 CC module/vfu_device/vfu_virtio_blk.o 00:04:12.032 LIB libspdk_scheduler_dpdk_governor.a 00:04:12.032 CC module/keyring/file/keyring_rpc.o 00:04:12.032 LIB libspdk_scheduler_gscheduler.a 00:04:12.032 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:12.032 SO libspdk_scheduler_gscheduler.so.4.0 00:04:12.032 LIB libspdk_scheduler_dynamic.a 00:04:12.032 CC module/accel/error/accel_error_rpc.o 00:04:12.032 SO libspdk_scheduler_dynamic.so.4.0 00:04:12.032 SYMLINK libspdk_scheduler_gscheduler.so 00:04:12.290 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:12.290 LIB libspdk_blob_bdev.a 00:04:12.290 LIB libspdk_keyring_file.a 00:04:12.290 SYMLINK libspdk_scheduler_dynamic.so 00:04:12.290 SO libspdk_blob_bdev.so.12.0 00:04:12.290 SO libspdk_keyring_file.so.2.0 00:04:12.290 LIB libspdk_accel_error.a 00:04:12.290 SYMLINK libspdk_blob_bdev.so 
00:04:12.290 CC module/vfu_device/vfu_virtio_scsi.o 00:04:12.290 SYMLINK libspdk_keyring_file.so 00:04:12.290 SO libspdk_accel_error.so.2.0 00:04:12.290 CC module/accel/dsa/accel_dsa.o 00:04:12.290 CC module/accel/dsa/accel_dsa_rpc.o 00:04:12.290 CC module/accel/ioat/accel_ioat.o 00:04:12.290 SYMLINK libspdk_accel_error.so 00:04:12.290 CC module/sock/uring/uring.o 00:04:12.549 CC module/keyring/linux/keyring.o 00:04:12.549 CC module/keyring/linux/keyring_rpc.o 00:04:12.549 CC module/accel/ioat/accel_ioat_rpc.o 00:04:12.549 CC module/accel/iaa/accel_iaa.o 00:04:12.549 CC module/accel/iaa/accel_iaa_rpc.o 00:04:12.807 LIB libspdk_keyring_linux.a 00:04:12.807 LIB libspdk_accel_dsa.a 00:04:12.807 LIB libspdk_accel_ioat.a 00:04:12.807 SO libspdk_keyring_linux.so.1.0 00:04:12.807 SO libspdk_accel_dsa.so.5.0 00:04:12.807 SO libspdk_accel_ioat.so.6.0 00:04:12.807 CC module/vfu_device/vfu_virtio_rpc.o 00:04:12.807 CC module/vfu_device/vfu_virtio_fs.o 00:04:12.807 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:12.807 LIB libspdk_accel_iaa.a 00:04:12.807 LIB libspdk_sock_posix.a 00:04:12.807 SYMLINK libspdk_accel_dsa.so 00:04:12.807 SYMLINK libspdk_keyring_linux.so 00:04:12.807 SO libspdk_sock_posix.so.6.0 00:04:12.807 SO libspdk_accel_iaa.so.3.0 00:04:12.807 SYMLINK libspdk_accel_ioat.so 00:04:12.807 SYMLINK libspdk_accel_iaa.so 00:04:13.066 CC module/fsdev/aio/linux_aio_mgr.o 00:04:13.066 SYMLINK libspdk_sock_posix.so 00:04:13.066 CC module/bdev/delay/vbdev_delay.o 00:04:13.066 CC module/bdev/error/vbdev_error.o 00:04:13.066 LIB libspdk_vfu_device.a 00:04:13.066 CC module/bdev/gpt/gpt.o 00:04:13.066 CC module/blobfs/bdev/blobfs_bdev.o 00:04:13.066 CC module/bdev/lvol/vbdev_lvol.o 00:04:13.066 SO libspdk_vfu_device.so.3.0 00:04:13.066 CC module/bdev/malloc/bdev_malloc.o 00:04:13.066 LIB libspdk_fsdev_aio.a 00:04:13.324 SO libspdk_fsdev_aio.so.1.0 00:04:13.324 CC module/bdev/null/bdev_null.o 00:04:13.324 SYMLINK libspdk_vfu_device.so 00:04:13.324 CC module/bdev/null/bdev_null_rpc.o 00:04:13.324 SYMLINK libspdk_fsdev_aio.so 00:04:13.324 CC module/bdev/error/vbdev_error_rpc.o 00:04:13.324 CC module/bdev/gpt/vbdev_gpt.o 00:04:13.324 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:13.324 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:13.324 LIB libspdk_sock_uring.a 00:04:13.324 SO libspdk_sock_uring.so.5.0 00:04:13.324 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:13.582 SYMLINK libspdk_sock_uring.so 00:04:13.582 LIB libspdk_bdev_error.a 00:04:13.582 SO libspdk_bdev_error.so.6.0 00:04:13.582 LIB libspdk_blobfs_bdev.a 00:04:13.582 SO libspdk_blobfs_bdev.so.6.0 00:04:13.582 LIB libspdk_bdev_null.a 00:04:13.582 SYMLINK libspdk_bdev_error.so 00:04:13.582 SO libspdk_bdev_null.so.6.0 00:04:13.582 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:13.582 SYMLINK libspdk_blobfs_bdev.so 00:04:13.582 LIB libspdk_bdev_delay.a 00:04:13.582 CC module/bdev/nvme/bdev_nvme.o 00:04:13.582 LIB libspdk_bdev_gpt.a 00:04:13.582 CC module/bdev/passthru/vbdev_passthru.o 00:04:13.582 LIB libspdk_bdev_malloc.a 00:04:13.582 SYMLINK libspdk_bdev_null.so 00:04:13.582 SO libspdk_bdev_delay.so.6.0 00:04:13.582 SO libspdk_bdev_gpt.so.6.0 00:04:13.582 SO libspdk_bdev_malloc.so.6.0 00:04:13.841 CC module/bdev/raid/bdev_raid.o 00:04:13.841 SYMLINK libspdk_bdev_delay.so 00:04:13.841 SYMLINK libspdk_bdev_gpt.so 00:04:13.841 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:13.841 SYMLINK libspdk_bdev_malloc.so 00:04:13.841 CC module/bdev/split/vbdev_split.o 00:04:13.841 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:13.841 CC 
module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:13.841 CC module/bdev/aio/bdev_aio.o 00:04:14.099 CC module/bdev/uring/bdev_uring.o 00:04:14.099 CC module/bdev/ftl/bdev_ftl.o 00:04:14.099 LIB libspdk_bdev_passthru.a 00:04:14.099 SO libspdk_bdev_passthru.so.6.0 00:04:14.099 LIB libspdk_bdev_lvol.a 00:04:14.099 CC module/bdev/split/vbdev_split_rpc.o 00:04:14.099 SO libspdk_bdev_lvol.so.6.0 00:04:14.099 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:14.099 SYMLINK libspdk_bdev_passthru.so 00:04:14.099 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:14.099 SYMLINK libspdk_bdev_lvol.so 00:04:14.357 LIB libspdk_bdev_zone_block.a 00:04:14.357 LIB libspdk_bdev_split.a 00:04:14.357 SO libspdk_bdev_zone_block.so.6.0 00:04:14.357 SO libspdk_bdev_split.so.6.0 00:04:14.357 CC module/bdev/raid/bdev_raid_rpc.o 00:04:14.357 LIB libspdk_bdev_ftl.a 00:04:14.357 CC module/bdev/aio/bdev_aio_rpc.o 00:04:14.358 SYMLINK libspdk_bdev_zone_block.so 00:04:14.358 SYMLINK libspdk_bdev_split.so 00:04:14.358 CC module/bdev/nvme/nvme_rpc.o 00:04:14.358 SO libspdk_bdev_ftl.so.6.0 00:04:14.358 CC module/bdev/uring/bdev_uring_rpc.o 00:04:14.358 CC module/bdev/iscsi/bdev_iscsi.o 00:04:14.618 SYMLINK libspdk_bdev_ftl.so 00:04:14.618 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:14.618 LIB libspdk_bdev_aio.a 00:04:14.618 SO libspdk_bdev_aio.so.6.0 00:04:14.618 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:14.618 CC module/bdev/nvme/bdev_mdns_client.o 00:04:14.618 LIB libspdk_bdev_uring.a 00:04:14.618 SO libspdk_bdev_uring.so.6.0 00:04:14.618 SYMLINK libspdk_bdev_aio.so 00:04:14.618 CC module/bdev/nvme/vbdev_opal.o 00:04:14.618 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:14.618 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:14.923 SYMLINK libspdk_bdev_uring.so 00:04:14.923 CC module/bdev/raid/bdev_raid_sb.o 00:04:14.923 CC module/bdev/raid/raid0.o 00:04:14.923 CC module/bdev/raid/raid1.o 00:04:14.923 LIB libspdk_bdev_iscsi.a 00:04:14.923 SO libspdk_bdev_iscsi.so.6.0 00:04:14.923 CC module/bdev/raid/concat.o 00:04:14.923 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:14.923 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:14.923 SYMLINK libspdk_bdev_iscsi.so 00:04:15.194 LIB libspdk_bdev_raid.a 00:04:15.194 LIB libspdk_bdev_virtio.a 00:04:15.194 SO libspdk_bdev_raid.so.6.0 00:04:15.452 SO libspdk_bdev_virtio.so.6.0 00:04:15.452 SYMLINK libspdk_bdev_raid.so 00:04:15.452 SYMLINK libspdk_bdev_virtio.so 00:04:16.829 LIB libspdk_bdev_nvme.a 00:04:16.829 SO libspdk_bdev_nvme.so.7.1 00:04:17.087 SYMLINK libspdk_bdev_nvme.so 00:04:17.655 CC module/event/subsystems/iobuf/iobuf.o 00:04:17.655 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:17.655 CC module/event/subsystems/sock/sock.o 00:04:17.655 CC module/event/subsystems/fsdev/fsdev.o 00:04:17.655 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:17.655 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:17.655 CC module/event/subsystems/scheduler/scheduler.o 00:04:17.655 CC module/event/subsystems/vmd/vmd.o 00:04:17.655 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:17.655 CC module/event/subsystems/keyring/keyring.o 00:04:17.655 LIB libspdk_event_keyring.a 00:04:17.655 LIB libspdk_event_vhost_blk.a 00:04:17.655 LIB libspdk_event_sock.a 00:04:17.655 SO libspdk_event_keyring.so.1.0 00:04:17.655 SO libspdk_event_vhost_blk.so.3.0 00:04:17.655 LIB libspdk_event_fsdev.a 00:04:17.655 LIB libspdk_event_vfu_tgt.a 00:04:17.655 LIB libspdk_event_vmd.a 00:04:17.655 LIB libspdk_event_iobuf.a 00:04:17.655 LIB libspdk_event_scheduler.a 00:04:17.655 SO libspdk_event_sock.so.5.0 00:04:17.655 SO 
libspdk_event_fsdev.so.1.0 00:04:17.913 SO libspdk_event_vfu_tgt.so.3.0 00:04:17.913 SO libspdk_event_vmd.so.6.0 00:04:17.913 SO libspdk_event_scheduler.so.4.0 00:04:17.913 SYMLINK libspdk_event_keyring.so 00:04:17.913 SO libspdk_event_iobuf.so.3.0 00:04:17.913 SYMLINK libspdk_event_vhost_blk.so 00:04:17.913 SYMLINK libspdk_event_sock.so 00:04:17.913 SYMLINK libspdk_event_fsdev.so 00:04:17.913 SYMLINK libspdk_event_vfu_tgt.so 00:04:17.913 SYMLINK libspdk_event_scheduler.so 00:04:17.913 SYMLINK libspdk_event_vmd.so 00:04:17.914 SYMLINK libspdk_event_iobuf.so 00:04:18.172 CC module/event/subsystems/accel/accel.o 00:04:18.172 LIB libspdk_event_accel.a 00:04:18.430 SO libspdk_event_accel.so.6.0 00:04:18.430 SYMLINK libspdk_event_accel.so 00:04:18.688 CC module/event/subsystems/bdev/bdev.o 00:04:18.946 LIB libspdk_event_bdev.a 00:04:18.946 SO libspdk_event_bdev.so.6.0 00:04:18.946 SYMLINK libspdk_event_bdev.so 00:04:19.205 CC module/event/subsystems/nbd/nbd.o 00:04:19.205 CC module/event/subsystems/scsi/scsi.o 00:04:19.205 CC module/event/subsystems/ublk/ublk.o 00:04:19.205 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:19.205 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:19.464 LIB libspdk_event_ublk.a 00:04:19.464 LIB libspdk_event_nbd.a 00:04:19.464 LIB libspdk_event_scsi.a 00:04:19.464 SO libspdk_event_nbd.so.6.0 00:04:19.464 SO libspdk_event_ublk.so.3.0 00:04:19.464 SO libspdk_event_scsi.so.6.0 00:04:19.464 SYMLINK libspdk_event_nbd.so 00:04:19.464 SYMLINK libspdk_event_ublk.so 00:04:19.464 SYMLINK libspdk_event_scsi.so 00:04:19.464 LIB libspdk_event_nvmf.a 00:04:19.464 SO libspdk_event_nvmf.so.6.0 00:04:19.722 SYMLINK libspdk_event_nvmf.so 00:04:19.723 CC module/event/subsystems/iscsi/iscsi.o 00:04:19.723 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:19.982 LIB libspdk_event_vhost_scsi.a 00:04:19.982 SO libspdk_event_vhost_scsi.so.3.0 00:04:19.982 LIB libspdk_event_iscsi.a 00:04:19.982 SO libspdk_event_iscsi.so.6.0 00:04:19.982 SYMLINK libspdk_event_vhost_scsi.so 00:04:20.241 SYMLINK libspdk_event_iscsi.so 00:04:20.241 SO libspdk.so.6.0 00:04:20.241 SYMLINK libspdk.so 00:04:20.499 CC app/spdk_lspci/spdk_lspci.o 00:04:20.499 CXX app/trace/trace.o 00:04:20.499 CC app/trace_record/trace_record.o 00:04:20.499 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:20.499 CC app/iscsi_tgt/iscsi_tgt.o 00:04:20.499 CC app/nvmf_tgt/nvmf_main.o 00:04:20.758 CC app/spdk_tgt/spdk_tgt.o 00:04:20.758 CC examples/util/zipf/zipf.o 00:04:20.758 CC examples/ioat/perf/perf.o 00:04:20.758 CC test/thread/poller_perf/poller_perf.o 00:04:20.758 LINK spdk_lspci 00:04:20.758 LINK interrupt_tgt 00:04:20.758 LINK nvmf_tgt 00:04:21.016 LINK zipf 00:04:21.016 LINK iscsi_tgt 00:04:21.016 LINK poller_perf 00:04:21.016 LINK spdk_trace_record 00:04:21.016 LINK spdk_tgt 00:04:21.016 LINK ioat_perf 00:04:21.016 CC app/spdk_nvme_perf/perf.o 00:04:21.016 LINK spdk_trace 00:04:21.016 CC app/spdk_nvme_identify/identify.o 00:04:21.274 CC app/spdk_nvme_discover/discovery_aer.o 00:04:21.274 CC app/spdk_top/spdk_top.o 00:04:21.274 CC examples/ioat/verify/verify.o 00:04:21.274 CC app/spdk_dd/spdk_dd.o 00:04:21.274 CC test/dma/test_dma/test_dma.o 00:04:21.274 CC test/app/bdev_svc/bdev_svc.o 00:04:21.274 LINK spdk_nvme_discover 00:04:21.532 CC app/fio/nvme/fio_plugin.o 00:04:21.532 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:21.532 LINK verify 00:04:21.532 LINK bdev_svc 00:04:21.532 CC test/app/histogram_perf/histogram_perf.o 00:04:21.791 LINK histogram_perf 00:04:21.791 LINK spdk_dd 00:04:21.791 CC 
examples/thread/thread/thread_ex.o 00:04:21.791 LINK test_dma 00:04:22.050 LINK nvme_fuzz 00:04:22.050 CC app/vhost/vhost.o 00:04:22.050 CC test/app/jsoncat/jsoncat.o 00:04:22.050 LINK spdk_nvme 00:04:22.050 CC test/app/stub/stub.o 00:04:22.050 LINK spdk_nvme_perf 00:04:22.050 LINK thread 00:04:22.309 LINK vhost 00:04:22.309 LINK spdk_nvme_identify 00:04:22.309 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:22.309 TEST_HEADER include/spdk/accel.h 00:04:22.309 TEST_HEADER include/spdk/accel_module.h 00:04:22.309 TEST_HEADER include/spdk/assert.h 00:04:22.309 TEST_HEADER include/spdk/barrier.h 00:04:22.309 TEST_HEADER include/spdk/base64.h 00:04:22.309 TEST_HEADER include/spdk/bdev.h 00:04:22.309 LINK jsoncat 00:04:22.309 TEST_HEADER include/spdk/bdev_module.h 00:04:22.309 TEST_HEADER include/spdk/bdev_zone.h 00:04:22.309 TEST_HEADER include/spdk/bit_array.h 00:04:22.309 TEST_HEADER include/spdk/bit_pool.h 00:04:22.309 TEST_HEADER include/spdk/blob_bdev.h 00:04:22.309 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:22.309 TEST_HEADER include/spdk/blobfs.h 00:04:22.309 TEST_HEADER include/spdk/blob.h 00:04:22.309 TEST_HEADER include/spdk/conf.h 00:04:22.309 TEST_HEADER include/spdk/config.h 00:04:22.309 TEST_HEADER include/spdk/cpuset.h 00:04:22.309 TEST_HEADER include/spdk/crc16.h 00:04:22.309 TEST_HEADER include/spdk/crc32.h 00:04:22.309 TEST_HEADER include/spdk/crc64.h 00:04:22.309 TEST_HEADER include/spdk/dif.h 00:04:22.309 TEST_HEADER include/spdk/dma.h 00:04:22.309 TEST_HEADER include/spdk/endian.h 00:04:22.309 TEST_HEADER include/spdk/env_dpdk.h 00:04:22.309 TEST_HEADER include/spdk/env.h 00:04:22.309 TEST_HEADER include/spdk/event.h 00:04:22.309 TEST_HEADER include/spdk/fd_group.h 00:04:22.309 TEST_HEADER include/spdk/fd.h 00:04:22.309 TEST_HEADER include/spdk/file.h 00:04:22.309 TEST_HEADER include/spdk/fsdev.h 00:04:22.309 TEST_HEADER include/spdk/fsdev_module.h 00:04:22.309 TEST_HEADER include/spdk/ftl.h 00:04:22.309 TEST_HEADER include/spdk/gpt_spec.h 00:04:22.309 TEST_HEADER include/spdk/hexlify.h 00:04:22.309 TEST_HEADER include/spdk/histogram_data.h 00:04:22.309 TEST_HEADER include/spdk/idxd.h 00:04:22.309 TEST_HEADER include/spdk/idxd_spec.h 00:04:22.309 TEST_HEADER include/spdk/init.h 00:04:22.309 TEST_HEADER include/spdk/ioat.h 00:04:22.309 TEST_HEADER include/spdk/ioat_spec.h 00:04:22.309 TEST_HEADER include/spdk/iscsi_spec.h 00:04:22.309 TEST_HEADER include/spdk/json.h 00:04:22.309 TEST_HEADER include/spdk/jsonrpc.h 00:04:22.309 TEST_HEADER include/spdk/keyring.h 00:04:22.309 TEST_HEADER include/spdk/keyring_module.h 00:04:22.309 TEST_HEADER include/spdk/likely.h 00:04:22.309 TEST_HEADER include/spdk/log.h 00:04:22.309 TEST_HEADER include/spdk/lvol.h 00:04:22.309 TEST_HEADER include/spdk/md5.h 00:04:22.309 TEST_HEADER include/spdk/memory.h 00:04:22.309 TEST_HEADER include/spdk/mmio.h 00:04:22.309 TEST_HEADER include/spdk/nbd.h 00:04:22.309 TEST_HEADER include/spdk/net.h 00:04:22.309 TEST_HEADER include/spdk/notify.h 00:04:22.309 TEST_HEADER include/spdk/nvme.h 00:04:22.309 TEST_HEADER include/spdk/nvme_intel.h 00:04:22.309 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:22.309 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:22.309 TEST_HEADER include/spdk/nvme_spec.h 00:04:22.309 TEST_HEADER include/spdk/nvme_zns.h 00:04:22.309 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:22.309 LINK spdk_top 00:04:22.309 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:22.309 TEST_HEADER include/spdk/nvmf.h 00:04:22.309 TEST_HEADER include/spdk/nvmf_spec.h 00:04:22.309 LINK stub 00:04:22.309 
TEST_HEADER include/spdk/nvmf_transport.h 00:04:22.309 TEST_HEADER include/spdk/opal.h 00:04:22.309 TEST_HEADER include/spdk/opal_spec.h 00:04:22.309 TEST_HEADER include/spdk/pci_ids.h 00:04:22.309 TEST_HEADER include/spdk/pipe.h 00:04:22.309 CC app/fio/bdev/fio_plugin.o 00:04:22.309 TEST_HEADER include/spdk/queue.h 00:04:22.309 TEST_HEADER include/spdk/reduce.h 00:04:22.309 TEST_HEADER include/spdk/rpc.h 00:04:22.309 TEST_HEADER include/spdk/scheduler.h 00:04:22.309 TEST_HEADER include/spdk/scsi.h 00:04:22.309 TEST_HEADER include/spdk/scsi_spec.h 00:04:22.309 TEST_HEADER include/spdk/sock.h 00:04:22.309 TEST_HEADER include/spdk/stdinc.h 00:04:22.309 TEST_HEADER include/spdk/string.h 00:04:22.309 TEST_HEADER include/spdk/thread.h 00:04:22.309 TEST_HEADER include/spdk/trace.h 00:04:22.309 TEST_HEADER include/spdk/trace_parser.h 00:04:22.568 TEST_HEADER include/spdk/tree.h 00:04:22.568 TEST_HEADER include/spdk/ublk.h 00:04:22.568 TEST_HEADER include/spdk/util.h 00:04:22.568 TEST_HEADER include/spdk/uuid.h 00:04:22.568 TEST_HEADER include/spdk/version.h 00:04:22.568 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:22.568 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:22.568 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:22.568 TEST_HEADER include/spdk/vhost.h 00:04:22.568 TEST_HEADER include/spdk/vmd.h 00:04:22.568 TEST_HEADER include/spdk/xor.h 00:04:22.568 TEST_HEADER include/spdk/zipf.h 00:04:22.568 CXX test/cpp_headers/accel.o 00:04:22.568 CC examples/sock/hello_world/hello_sock.o 00:04:22.568 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:22.568 CC examples/idxd/perf/perf.o 00:04:22.568 CC examples/vmd/lsvmd/lsvmd.o 00:04:22.568 CXX test/cpp_headers/accel_module.o 00:04:22.827 CC test/env/mem_callbacks/mem_callbacks.o 00:04:22.827 CC examples/accel/perf/accel_perf.o 00:04:22.827 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:22.827 LINK lsvmd 00:04:22.827 CXX test/cpp_headers/assert.o 00:04:22.827 LINK hello_sock 00:04:23.085 LINK spdk_bdev 00:04:23.085 CXX test/cpp_headers/barrier.o 00:04:23.085 CC examples/vmd/led/led.o 00:04:23.085 LINK idxd_perf 00:04:23.085 CXX test/cpp_headers/base64.o 00:04:23.085 LINK hello_fsdev 00:04:23.085 LINK vhost_fuzz 00:04:23.085 CXX test/cpp_headers/bdev.o 00:04:23.085 LINK led 00:04:23.344 CXX test/cpp_headers/bdev_module.o 00:04:23.344 CXX test/cpp_headers/bdev_zone.o 00:04:23.344 LINK mem_callbacks 00:04:23.344 CXX test/cpp_headers/bit_array.o 00:04:23.344 LINK accel_perf 00:04:23.344 CXX test/cpp_headers/bit_pool.o 00:04:23.344 CC examples/nvme/hello_world/hello_world.o 00:04:23.344 CC examples/blob/hello_world/hello_blob.o 00:04:23.602 CC examples/blob/cli/blobcli.o 00:04:23.602 CXX test/cpp_headers/blob_bdev.o 00:04:23.603 CXX test/cpp_headers/blobfs_bdev.o 00:04:23.603 CC examples/nvme/reconnect/reconnect.o 00:04:23.603 CC test/env/vtophys/vtophys.o 00:04:23.603 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:23.603 LINK hello_world 00:04:23.861 LINK hello_blob 00:04:23.861 CXX test/cpp_headers/blobfs.o 00:04:23.861 LINK vtophys 00:04:23.861 CC examples/bdev/hello_world/hello_bdev.o 00:04:23.861 LINK env_dpdk_post_init 00:04:23.861 CC examples/bdev/bdevperf/bdevperf.o 00:04:23.861 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:23.861 CXX test/cpp_headers/blob.o 00:04:24.120 LINK reconnect 00:04:24.120 CC examples/nvme/arbitration/arbitration.o 00:04:24.120 CC examples/nvme/hotplug/hotplug.o 00:04:24.120 LINK hello_bdev 00:04:24.120 LINK blobcli 00:04:24.120 CC test/env/memory/memory_ut.o 00:04:24.120 CXX test/cpp_headers/conf.o 
00:04:24.379 CXX test/cpp_headers/config.o 00:04:24.379 CXX test/cpp_headers/cpuset.o 00:04:24.379 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:24.379 LINK hotplug 00:04:24.379 LINK iscsi_fuzz 00:04:24.379 CC examples/nvme/abort/abort.o 00:04:24.379 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:24.379 LINK arbitration 00:04:24.379 CXX test/cpp_headers/crc16.o 00:04:24.638 CXX test/cpp_headers/crc32.o 00:04:24.638 LINK cmb_copy 00:04:24.638 LINK nvme_manage 00:04:24.638 LINK pmr_persistence 00:04:24.638 CXX test/cpp_headers/crc64.o 00:04:24.897 CC test/event/event_perf/event_perf.o 00:04:24.897 CC test/rpc_client/rpc_client_test.o 00:04:24.897 CC test/nvme/aer/aer.o 00:04:24.897 CC test/event/reactor/reactor.o 00:04:24.897 LINK abort 00:04:24.897 CXX test/cpp_headers/dif.o 00:04:24.897 CC test/env/pci/pci_ut.o 00:04:24.897 LINK bdevperf 00:04:24.897 LINK event_perf 00:04:24.897 CC test/accel/dif/dif.o 00:04:24.897 LINK rpc_client_test 00:04:25.156 LINK reactor 00:04:25.156 CXX test/cpp_headers/dma.o 00:04:25.156 CXX test/cpp_headers/endian.o 00:04:25.156 LINK aer 00:04:25.156 CXX test/cpp_headers/env_dpdk.o 00:04:25.156 CC test/event/reactor_perf/reactor_perf.o 00:04:25.415 CC test/blobfs/mkfs/mkfs.o 00:04:25.415 CC test/lvol/esnap/esnap.o 00:04:25.415 CC examples/nvmf/nvmf/nvmf.o 00:04:25.415 LINK pci_ut 00:04:25.415 CXX test/cpp_headers/env.o 00:04:25.415 CC test/nvme/reset/reset.o 00:04:25.415 LINK reactor_perf 00:04:25.415 CC test/nvme/sgl/sgl.o 00:04:25.415 LINK memory_ut 00:04:25.674 LINK mkfs 00:04:25.674 CXX test/cpp_headers/event.o 00:04:25.674 CXX test/cpp_headers/fd_group.o 00:04:25.674 CC test/event/app_repeat/app_repeat.o 00:04:25.674 LINK reset 00:04:25.674 LINK nvmf 00:04:25.674 CC test/nvme/e2edp/nvme_dp.o 00:04:25.933 LINK sgl 00:04:25.933 CXX test/cpp_headers/fd.o 00:04:25.933 LINK dif 00:04:25.933 CC test/nvme/overhead/overhead.o 00:04:25.933 LINK app_repeat 00:04:25.933 CC test/nvme/err_injection/err_injection.o 00:04:25.933 CC test/nvme/startup/startup.o 00:04:25.933 CXX test/cpp_headers/file.o 00:04:26.192 CXX test/cpp_headers/fsdev.o 00:04:26.192 CC test/nvme/reserve/reserve.o 00:04:26.192 CC test/nvme/simple_copy/simple_copy.o 00:04:26.192 LINK nvme_dp 00:04:26.192 LINK err_injection 00:04:26.192 CC test/event/scheduler/scheduler.o 00:04:26.192 LINK startup 00:04:26.192 LINK overhead 00:04:26.192 CXX test/cpp_headers/fsdev_module.o 00:04:26.451 CC test/nvme/connect_stress/connect_stress.o 00:04:26.451 LINK reserve 00:04:26.451 CC test/nvme/boot_partition/boot_partition.o 00:04:26.451 CC test/nvme/compliance/nvme_compliance.o 00:04:26.451 LINK simple_copy 00:04:26.451 CC test/nvme/fused_ordering/fused_ordering.o 00:04:26.451 LINK scheduler 00:04:26.451 CXX test/cpp_headers/ftl.o 00:04:26.451 LINK connect_stress 00:04:26.710 LINK boot_partition 00:04:26.710 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:26.710 CC test/bdev/bdevio/bdevio.o 00:04:26.710 CC test/nvme/fdp/fdp.o 00:04:26.710 CXX test/cpp_headers/gpt_spec.o 00:04:26.710 CXX test/cpp_headers/hexlify.o 00:04:26.710 CXX test/cpp_headers/histogram_data.o 00:04:26.710 LINK fused_ordering 00:04:26.710 LINK nvme_compliance 00:04:26.968 LINK doorbell_aers 00:04:26.968 CC test/nvme/cuse/cuse.o 00:04:26.968 CXX test/cpp_headers/idxd.o 00:04:26.968 CXX test/cpp_headers/idxd_spec.o 00:04:26.968 CXX test/cpp_headers/init.o 00:04:26.968 CXX test/cpp_headers/ioat.o 00:04:26.968 CXX test/cpp_headers/ioat_spec.o 00:04:26.968 CXX test/cpp_headers/iscsi_spec.o 00:04:26.968 CXX test/cpp_headers/json.o 00:04:26.968 
CXX test/cpp_headers/jsonrpc.o 00:04:26.968 CXX test/cpp_headers/keyring.o 00:04:26.968 CXX test/cpp_headers/keyring_module.o 00:04:27.227 LINK fdp 00:04:27.227 LINK bdevio 00:04:27.227 CXX test/cpp_headers/likely.o 00:04:27.227 CXX test/cpp_headers/log.o 00:04:27.227 CXX test/cpp_headers/lvol.o 00:04:27.227 CXX test/cpp_headers/md5.o 00:04:27.227 CXX test/cpp_headers/memory.o 00:04:27.227 CXX test/cpp_headers/mmio.o 00:04:27.227 CXX test/cpp_headers/nbd.o 00:04:27.227 CXX test/cpp_headers/net.o 00:04:27.227 CXX test/cpp_headers/notify.o 00:04:27.486 CXX test/cpp_headers/nvme.o 00:04:27.486 CXX test/cpp_headers/nvme_intel.o 00:04:27.486 CXX test/cpp_headers/nvme_ocssd.o 00:04:27.486 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:27.486 CXX test/cpp_headers/nvme_spec.o 00:04:27.486 CXX test/cpp_headers/nvme_zns.o 00:04:27.486 CXX test/cpp_headers/nvmf_cmd.o 00:04:27.486 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:27.486 CXX test/cpp_headers/nvmf.o 00:04:27.486 CXX test/cpp_headers/nvmf_spec.o 00:04:27.744 CXX test/cpp_headers/nvmf_transport.o 00:04:27.744 CXX test/cpp_headers/opal.o 00:04:27.744 CXX test/cpp_headers/opal_spec.o 00:04:27.744 CXX test/cpp_headers/pci_ids.o 00:04:27.744 CXX test/cpp_headers/pipe.o 00:04:27.744 CXX test/cpp_headers/queue.o 00:04:27.744 CXX test/cpp_headers/reduce.o 00:04:27.744 CXX test/cpp_headers/rpc.o 00:04:27.744 CXX test/cpp_headers/scheduler.o 00:04:27.744 CXX test/cpp_headers/scsi.o 00:04:27.744 CXX test/cpp_headers/scsi_spec.o 00:04:27.744 CXX test/cpp_headers/sock.o 00:04:27.744 CXX test/cpp_headers/stdinc.o 00:04:28.003 CXX test/cpp_headers/string.o 00:04:28.003 CXX test/cpp_headers/thread.o 00:04:28.003 CXX test/cpp_headers/trace.o 00:04:28.003 CXX test/cpp_headers/trace_parser.o 00:04:28.003 CXX test/cpp_headers/tree.o 00:04:28.003 CXX test/cpp_headers/ublk.o 00:04:28.003 CXX test/cpp_headers/util.o 00:04:28.003 CXX test/cpp_headers/uuid.o 00:04:28.003 CXX test/cpp_headers/version.o 00:04:28.003 CXX test/cpp_headers/vfio_user_pci.o 00:04:28.003 CXX test/cpp_headers/vfio_user_spec.o 00:04:28.003 CXX test/cpp_headers/vhost.o 00:04:28.003 CXX test/cpp_headers/vmd.o 00:04:28.262 CXX test/cpp_headers/xor.o 00:04:28.262 CXX test/cpp_headers/zipf.o 00:04:28.521 LINK cuse 00:04:32.711 LINK esnap 00:04:32.711 00:04:32.711 real 1m37.186s 00:04:32.711 user 9m15.156s 00:04:32.711 sys 1m39.205s 00:04:32.711 09:06:26 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:32.711 09:06:26 make -- common/autotest_common.sh@10 -- $ set +x 00:04:32.711 ************************************ 00:04:32.711 END TEST make 00:04:32.711 ************************************ 00:04:32.711 09:06:26 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:32.711 09:06:26 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:32.711 09:06:26 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:32.711 09:06:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:32.711 09:06:26 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:32.711 09:06:26 -- pm/common@44 -- $ pid=5290 00:04:32.711 09:06:26 -- pm/common@50 -- $ kill -TERM 5290 00:04:32.711 09:06:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:32.711 09:06:26 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:32.711 09:06:26 -- pm/common@44 -- $ pid=5292 00:04:32.711 09:06:26 -- pm/common@50 -- $ kill -TERM 5292 00:04:32.711 09:06:26 -- spdk/autorun.sh@26 -- $ (( 
SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:32.711 09:06:26 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:32.711 09:06:26 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:32.711 09:06:26 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:32.711 09:06:26 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:32.711 09:06:26 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:32.711 09:06:26 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.711 09:06:26 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.711 09:06:26 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.711 09:06:26 -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.711 09:06:26 -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.711 09:06:26 -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.711 09:06:26 -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.711 09:06:26 -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.711 09:06:26 -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.711 09:06:26 -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.711 09:06:26 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.711 09:06:26 -- scripts/common.sh@344 -- # case "$op" in 00:04:32.711 09:06:26 -- scripts/common.sh@345 -- # : 1 00:04:32.711 09:06:26 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.711 09:06:26 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:32.711 09:06:26 -- scripts/common.sh@365 -- # decimal 1 00:04:32.711 09:06:26 -- scripts/common.sh@353 -- # local d=1 00:04:32.711 09:06:26 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.711 09:06:26 -- scripts/common.sh@355 -- # echo 1 00:04:32.711 09:06:26 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.711 09:06:26 -- scripts/common.sh@366 -- # decimal 2 00:04:32.711 09:06:26 -- scripts/common.sh@353 -- # local d=2 00:04:32.711 09:06:26 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.711 09:06:26 -- scripts/common.sh@355 -- # echo 2 00:04:32.711 09:06:26 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.711 09:06:26 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.711 09:06:26 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.711 09:06:26 -- scripts/common.sh@368 -- # return 0 00:04:32.711 09:06:26 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.711 09:06:26 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:32.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.711 --rc genhtml_branch_coverage=1 00:04:32.711 --rc genhtml_function_coverage=1 00:04:32.711 --rc genhtml_legend=1 00:04:32.711 --rc geninfo_all_blocks=1 00:04:32.711 --rc geninfo_unexecuted_blocks=1 00:04:32.711 00:04:32.711 ' 00:04:32.711 09:06:26 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:32.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.711 --rc genhtml_branch_coverage=1 00:04:32.711 --rc genhtml_function_coverage=1 00:04:32.711 --rc genhtml_legend=1 00:04:32.711 --rc geninfo_all_blocks=1 00:04:32.711 --rc geninfo_unexecuted_blocks=1 00:04:32.711 00:04:32.711 ' 00:04:32.711 09:06:26 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:32.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.711 --rc genhtml_branch_coverage=1 00:04:32.712 --rc genhtml_function_coverage=1 00:04:32.712 --rc genhtml_legend=1 00:04:32.712 
--rc geninfo_all_blocks=1 00:04:32.712 --rc geninfo_unexecuted_blocks=1 00:04:32.712 00:04:32.712 ' 00:04:32.712 09:06:26 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:32.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.712 --rc genhtml_branch_coverage=1 00:04:32.712 --rc genhtml_function_coverage=1 00:04:32.712 --rc genhtml_legend=1 00:04:32.712 --rc geninfo_all_blocks=1 00:04:32.712 --rc geninfo_unexecuted_blocks=1 00:04:32.712 00:04:32.712 ' 00:04:32.712 09:06:26 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:32.712 09:06:26 -- nvmf/common.sh@7 -- # uname -s 00:04:32.712 09:06:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:32.712 09:06:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:32.712 09:06:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:32.712 09:06:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:32.712 09:06:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:32.712 09:06:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:32.712 09:06:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:32.712 09:06:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:32.712 09:06:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:32.712 09:06:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:32.712 09:06:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:04:32.712 09:06:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:04:32.712 09:06:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:32.712 09:06:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:32.712 09:06:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:32.712 09:06:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:32.712 09:06:26 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:32.712 09:06:26 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:32.712 09:06:26 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:32.712 09:06:26 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:32.712 09:06:26 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:32.712 09:06:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.712 09:06:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.712 09:06:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.712 09:06:26 -- paths/export.sh@5 -- # export PATH 00:04:32.712 09:06:26 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.712 09:06:26 -- nvmf/common.sh@51 -- # : 0 00:04:32.712 09:06:26 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:32.712 09:06:26 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:32.712 09:06:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:32.712 09:06:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:32.712 09:06:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:32.712 09:06:26 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:32.712 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:32.712 09:06:26 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:32.712 09:06:26 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:32.712 09:06:26 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:32.712 09:06:26 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:32.712 09:06:26 -- spdk/autotest.sh@32 -- # uname -s 00:04:32.712 09:06:26 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:32.712 09:06:26 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:32.712 09:06:26 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:32.712 09:06:26 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:32.712 09:06:26 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:32.712 09:06:26 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:32.712 09:06:26 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:32.712 09:06:26 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:32.712 09:06:26 -- spdk/autotest.sh@48 -- # udevadm_pid=56858 00:04:32.712 09:06:26 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:32.712 09:06:26 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:32.712 09:06:26 -- pm/common@17 -- # local monitor 00:04:32.712 09:06:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:32.712 09:06:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:32.712 09:06:26 -- pm/common@25 -- # sleep 1 00:04:32.712 09:06:26 -- pm/common@21 -- # date +%s 00:04:32.712 09:06:26 -- pm/common@21 -- # date +%s 00:04:32.712 09:06:26 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734080786 00:04:32.712 09:06:26 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734080786 00:04:32.712 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734080786_collect-cpu-load.pm.log 00:04:32.712 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734080786_collect-vmstat.pm.log 00:04:33.649 09:06:27 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:33.649 09:06:27 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:33.649 09:06:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:33.649 09:06:27 -- common/autotest_common.sh@10 -- # set +x 00:04:33.649 09:06:27 -- spdk/autotest.sh@59 -- # create_test_list 
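The autotest prologue above saves the systemd coredump handler, points the kernel core pattern at SPDK's core-collector.sh, and creates the per-run coredumps directory. A hedged reconstruction of those steps; the write target (/proc/sys/kernel/core_pattern) and the restore-on-cleanup step are assumptions, the log only shows the saved value and the two echoes:

  rootdir=/home/vagrant/spdk_repo/spdk
  output=$rootdir/../output
  old_core_pattern=$(< /proc/sys/kernel/core_pattern)        # '|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' here
  mkdir -p "$output/coredumps"
  echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
  # ... tests run; any crash is piped to core-collector.sh and lands under $output/coredumps ...
  echo "$old_core_pattern" > /proc/sys/kernel/core_pattern    # restore on cleanup (assumed)
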
00:04:33.649 09:06:27 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:33.649 09:06:27 -- common/autotest_common.sh@10 -- # set +x 00:04:33.908 09:06:27 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:33.908 09:06:27 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:33.908 09:06:27 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:33.908 09:06:27 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:33.908 09:06:27 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:33.908 09:06:27 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:33.908 09:06:27 -- common/autotest_common.sh@1457 -- # uname 00:04:33.908 09:06:27 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:33.908 09:06:27 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:33.908 09:06:27 -- common/autotest_common.sh@1477 -- # uname 00:04:33.908 09:06:27 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:33.908 09:06:27 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:33.908 09:06:27 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:33.908 lcov: LCOV version 1.15 00:04:33.908 09:06:27 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:48.829 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:48.829 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:06.915 09:06:59 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:06.915 09:06:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:06.915 09:06:59 -- common/autotest_common.sh@10 -- # set +x 00:05:06.915 09:06:59 -- spdk/autotest.sh@78 -- # rm -f 00:05:06.915 09:06:59 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:06.915 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:06.915 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:06.915 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:06.915 09:06:59 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:06.915 09:06:59 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:06.915 09:06:59 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:06.915 09:06:59 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:06.915 09:06:59 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:06.915 09:06:59 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:06.915 09:06:59 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:06.915 09:06:59 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:05:06.915 09:06:59 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:06.915 09:06:59 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:06.915 09:06:59 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:06.915 09:06:59 -- 
common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:06.915 09:06:59 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:06.915 09:06:59 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:06.915 09:06:59 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:05:06.915 09:06:59 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:06.915 09:06:59 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:06.915 09:06:59 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:06.915 09:06:59 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:06.915 09:06:59 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:06.915 09:06:59 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:06.915 09:06:59 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:05:06.916 09:06:59 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:06.916 09:06:59 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:06.916 09:06:59 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:06.916 09:06:59 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:06.916 09:06:59 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:05:06.916 09:06:59 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:06.916 09:06:59 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:06.916 09:06:59 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:06.916 09:06:59 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:06.916 09:06:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:06.916 09:06:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:06.916 09:06:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:06.916 09:06:59 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:06.916 09:06:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:06.916 No valid GPT data, bailing 00:05:06.916 09:06:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:06.916 09:06:59 -- scripts/common.sh@394 -- # pt= 00:05:06.916 09:06:59 -- scripts/common.sh@395 -- # return 1 00:05:06.916 09:06:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:06.916 1+0 records in 00:05:06.916 1+0 records out 00:05:06.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00451247 s, 232 MB/s 00:05:06.916 09:06:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:06.916 09:06:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:06.916 09:06:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:06.916 09:06:59 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:06.916 09:06:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:06.916 No valid GPT data, bailing 00:05:06.916 09:06:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:06.916 09:06:59 -- scripts/common.sh@394 -- # pt= 00:05:06.916 09:06:59 -- scripts/common.sh@395 -- # return 1 00:05:06.916 09:06:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:06.916 1+0 records in 00:05:06.916 1+0 records out 00:05:06.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00474968 s, 221 MB/s 00:05:06.916 09:06:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 
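Before any namespace is scrubbed, get_zoned_devs above walks the NVMe namespaces and checks each one's zoned attribute; only non-zoned devices fall through to the GPT probe and the 1 MiB dd wipe seen here. A minimal sketch of that check, using the device names reported in this run:

  zoned_devs=()
  for ns in /sys/block/nvme*n*; do
    dev=$(basename "$ns")
    if [[ -e $ns/queue/zoned && $(cat "$ns/queue/zoned") != none ]]; then
      zoned_devs+=("$dev")    # zoned namespaces are excluded from the wipe loop
    fi
  done
  # in this run every namespace reports 'none', so zoned_devs stays empty and all of them get wiped
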
00:05:06.916 09:06:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:06.916 09:06:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:06.916 09:06:59 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:06.916 09:06:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:06.916 No valid GPT data, bailing 00:05:06.916 09:06:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:06.916 09:06:59 -- scripts/common.sh@394 -- # pt= 00:05:06.916 09:06:59 -- scripts/common.sh@395 -- # return 1 00:05:06.916 09:06:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:06.916 1+0 records in 00:05:06.916 1+0 records out 00:05:06.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00427621 s, 245 MB/s 00:05:06.916 09:06:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:06.916 09:06:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:06.916 09:06:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:06.916 09:06:59 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:06.916 09:06:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:06.916 No valid GPT data, bailing 00:05:06.916 09:07:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:06.916 09:07:00 -- scripts/common.sh@394 -- # pt= 00:05:06.916 09:07:00 -- scripts/common.sh@395 -- # return 1 00:05:06.916 09:07:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:06.916 1+0 records in 00:05:06.916 1+0 records out 00:05:06.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00452493 s, 232 MB/s 00:05:06.916 09:07:00 -- spdk/autotest.sh@105 -- # sync 00:05:06.916 09:07:00 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:06.916 09:07:00 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:06.916 09:07:00 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:08.293 09:07:02 -- spdk/autotest.sh@111 -- # uname -s 00:05:08.293 09:07:02 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:08.293 09:07:02 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:08.293 09:07:02 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:08.860 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:08.860 Hugepages 00:05:08.860 node hugesize free / total 00:05:08.860 node0 1048576kB 0 / 0 00:05:08.860 node0 2048kB 0 / 0 00:05:08.860 00:05:08.860 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:09.117 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:09.117 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:09.117 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:09.117 09:07:02 -- spdk/autotest.sh@117 -- # uname -s 00:05:09.117 09:07:02 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:09.117 09:07:02 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:09.117 09:07:02 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:09.684 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:09.943 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:09.943 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:09.943 09:07:03 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:11.324 09:07:04 -- 
common/autotest_common.sh@1518 -- # bdfs=() 00:05:11.324 09:07:04 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:11.324 09:07:04 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:11.324 09:07:04 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:11.324 09:07:04 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:11.324 09:07:04 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:11.324 09:07:04 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:11.324 09:07:04 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:11.324 09:07:04 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:11.324 09:07:04 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:11.324 09:07:04 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:11.324 09:07:04 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:11.324 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:11.324 Waiting for block devices as requested 00:05:11.584 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:11.584 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:11.584 09:07:05 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:11.584 09:07:05 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:11.584 09:07:05 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:11.584 09:07:05 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:11.584 09:07:05 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:11.584 09:07:05 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:11.584 09:07:05 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:11.584 09:07:05 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:11.584 09:07:05 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:11.584 09:07:05 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:11.584 09:07:05 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:11.584 09:07:05 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:11.584 09:07:05 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:11.584 09:07:05 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:11.584 09:07:05 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:11.584 09:07:05 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:11.584 09:07:05 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:11.584 09:07:05 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:11.584 09:07:05 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:11.584 09:07:05 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:11.584 09:07:05 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:11.584 09:07:05 -- common/autotest_common.sh@1543 -- # continue 00:05:11.584 09:07:05 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:11.843 09:07:05 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:11.843 09:07:05 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 
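The nvme_namespace_revert pass above maps each PCI address back to its character device by resolving the /sys/class/nvme symlinks, then parses nvme id-ctrl output for the OACS and UNVMCAP fields. A sketch of that mapping and the two checks, with the values this controller reported:

  bdf=0000:00:10.0
  for link in /sys/class/nvme/nvme*; do
    [[ $(readlink -f "$link") == *"$bdf/nvme/"* ]] && ctrlr=/dev/$(basename "$link")   # resolves to /dev/nvme1 here
  done
  nvme id-ctrl "$ctrlr" | grep oacs      # 0x12a: bit 3 (0x8) set => Namespace Management supported
  nvme id-ctrl "$ctrlr" | grep unvmcap   # 0: no unallocated capacity, so there is nothing to revert and the loop continues
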
00:05:11.843 09:07:05 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:11.843 09:07:05 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:11.843 09:07:05 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:11.843 09:07:05 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:11.843 09:07:05 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:11.843 09:07:05 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:11.843 09:07:05 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:11.843 09:07:05 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:11.843 09:07:05 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:11.843 09:07:05 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:11.843 09:07:05 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:11.843 09:07:05 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:11.843 09:07:05 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:11.843 09:07:05 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:11.843 09:07:05 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:11.843 09:07:05 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:11.843 09:07:05 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:11.843 09:07:05 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:11.843 09:07:05 -- common/autotest_common.sh@1543 -- # continue 00:05:11.843 09:07:05 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:11.843 09:07:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:11.843 09:07:05 -- common/autotest_common.sh@10 -- # set +x 00:05:11.843 09:07:05 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:11.843 09:07:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:11.843 09:07:05 -- common/autotest_common.sh@10 -- # set +x 00:05:11.843 09:07:05 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:12.410 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:12.410 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:12.669 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:12.669 09:07:06 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:12.669 09:07:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:12.669 09:07:06 -- common/autotest_common.sh@10 -- # set +x 00:05:12.669 09:07:06 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:12.669 09:07:06 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:12.669 09:07:06 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:12.669 09:07:06 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:12.669 09:07:06 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:12.669 09:07:06 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:12.669 09:07:06 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:12.669 09:07:06 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:12.669 09:07:06 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:12.669 09:07:06 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:12.669 09:07:06 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:12.669 09:07:06 -- common/autotest_common.sh@1499 -- # 
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:12.669 09:07:06 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:12.669 09:07:06 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:12.669 09:07:06 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:12.669 09:07:06 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:12.669 09:07:06 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:12.669 09:07:06 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:12.669 09:07:06 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:12.669 09:07:06 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:12.669 09:07:06 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:12.669 09:07:06 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:12.669 09:07:06 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:12.669 09:07:06 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:12.669 09:07:06 -- common/autotest_common.sh@1572 -- # return 0 00:05:12.669 09:07:06 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:12.669 09:07:06 -- common/autotest_common.sh@1580 -- # return 0 00:05:12.669 09:07:06 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:12.669 09:07:06 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:12.669 09:07:06 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:12.669 09:07:06 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:12.669 09:07:06 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:12.669 09:07:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:12.669 09:07:06 -- common/autotest_common.sh@10 -- # set +x 00:05:12.669 09:07:06 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:05:12.669 09:07:06 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:05:12.669 09:07:06 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:05:12.669 09:07:06 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:12.669 09:07:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.669 09:07:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.669 09:07:06 -- common/autotest_common.sh@10 -- # set +x 00:05:12.669 ************************************ 00:05:12.669 START TEST env 00:05:12.669 ************************************ 00:05:12.669 09:07:06 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:12.928 * Looking for test storage... 
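opal_revert_cleanup above only acts on controllers whose PCI device id matches 0x0a54; the emulated controllers in this VM report 0x0010, so the bdfs list stays empty and the OPAL revert is skipped. The filter it traces reduces to:

  bdfs=()
  for bdf in 0000:00:10.0 0000:00:11.0; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")
    [[ $device == 0x0a54 ]] && bdfs+=("$bdf")    # no match in this run => nothing to revert
  done
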
00:05:12.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:12.928 09:07:06 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:12.928 09:07:06 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:12.928 09:07:06 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:12.928 09:07:06 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:12.928 09:07:06 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.928 09:07:06 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.928 09:07:06 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.928 09:07:06 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.928 09:07:06 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.928 09:07:06 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.928 09:07:06 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.928 09:07:06 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.928 09:07:06 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.928 09:07:06 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.928 09:07:06 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.928 09:07:06 env -- scripts/common.sh@344 -- # case "$op" in 00:05:12.928 09:07:06 env -- scripts/common.sh@345 -- # : 1 00:05:12.928 09:07:06 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.928 09:07:06 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.928 09:07:06 env -- scripts/common.sh@365 -- # decimal 1 00:05:12.928 09:07:06 env -- scripts/common.sh@353 -- # local d=1 00:05:12.928 09:07:06 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.928 09:07:06 env -- scripts/common.sh@355 -- # echo 1 00:05:12.928 09:07:06 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.928 09:07:06 env -- scripts/common.sh@366 -- # decimal 2 00:05:12.928 09:07:06 env -- scripts/common.sh@353 -- # local d=2 00:05:12.928 09:07:06 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.928 09:07:06 env -- scripts/common.sh@355 -- # echo 2 00:05:12.928 09:07:06 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.928 09:07:06 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.928 09:07:06 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.928 09:07:06 env -- scripts/common.sh@368 -- # return 0 00:05:12.928 09:07:06 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.928 09:07:06 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:12.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.928 --rc genhtml_branch_coverage=1 00:05:12.928 --rc genhtml_function_coverage=1 00:05:12.928 --rc genhtml_legend=1 00:05:12.928 --rc geninfo_all_blocks=1 00:05:12.928 --rc geninfo_unexecuted_blocks=1 00:05:12.928 00:05:12.928 ' 00:05:12.928 09:07:06 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:12.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.928 --rc genhtml_branch_coverage=1 00:05:12.928 --rc genhtml_function_coverage=1 00:05:12.928 --rc genhtml_legend=1 00:05:12.928 --rc geninfo_all_blocks=1 00:05:12.928 --rc geninfo_unexecuted_blocks=1 00:05:12.928 00:05:12.928 ' 00:05:12.928 09:07:06 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:12.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.928 --rc genhtml_branch_coverage=1 00:05:12.928 --rc genhtml_function_coverage=1 00:05:12.928 --rc 
genhtml_legend=1 00:05:12.928 --rc geninfo_all_blocks=1 00:05:12.928 --rc geninfo_unexecuted_blocks=1 00:05:12.928 00:05:12.928 ' 00:05:12.928 09:07:06 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:12.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.928 --rc genhtml_branch_coverage=1 00:05:12.928 --rc genhtml_function_coverage=1 00:05:12.928 --rc genhtml_legend=1 00:05:12.928 --rc geninfo_all_blocks=1 00:05:12.928 --rc geninfo_unexecuted_blocks=1 00:05:12.928 00:05:12.928 ' 00:05:12.928 09:07:06 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:12.928 09:07:06 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.928 09:07:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.928 09:07:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.928 ************************************ 00:05:12.928 START TEST env_memory 00:05:12.928 ************************************ 00:05:12.928 09:07:06 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:12.928 00:05:12.928 00:05:12.928 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.928 http://cunit.sourceforge.net/ 00:05:12.928 00:05:12.928 00:05:12.928 Suite: memory 00:05:12.928 Test: alloc and free memory map ...[2024-12-13 09:07:06.783660] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:13.187 passed 00:05:13.187 Test: mem map translation ...[2024-12-13 09:07:06.846362] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:13.187 [2024-12-13 09:07:06.846428] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:13.187 [2024-12-13 09:07:06.846527] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:13.187 [2024-12-13 09:07:06.846554] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:13.187 passed 00:05:13.187 Test: mem map registration ...[2024-12-13 09:07:06.944543] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:13.187 [2024-12-13 09:07:06.944606] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:13.187 passed 00:05:13.446 Test: mem map adjacent registrations ...passed 00:05:13.446 00:05:13.446 Run Summary: Type Total Ran Passed Failed Inactive 00:05:13.446 suites 1 1 n/a 0 0 00:05:13.446 tests 4 4 4 0 0 00:05:13.446 asserts 152 152 152 0 n/a 00:05:13.446 00:05:13.446 Elapsed time = 0.354 seconds 00:05:13.446 00:05:13.446 real 0m0.397s 00:05:13.446 user 0m0.364s 00:05:13.446 sys 0m0.024s 00:05:13.446 09:07:07 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.446 09:07:07 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:13.446 ************************************ 00:05:13.446 END TEST env_memory 00:05:13.446 ************************************ 00:05:13.446 09:07:07 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:13.446 09:07:07 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.446 09:07:07 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.446 09:07:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:13.446 ************************************ 00:05:13.446 START TEST env_vtophys 00:05:13.446 ************************************ 00:05:13.446 09:07:07 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:13.446 EAL: lib.eal log level changed from notice to debug 00:05:13.446 EAL: Detected lcore 0 as core 0 on socket 0 00:05:13.446 EAL: Detected lcore 1 as core 0 on socket 0 00:05:13.446 EAL: Detected lcore 2 as core 0 on socket 0 00:05:13.446 EAL: Detected lcore 3 as core 0 on socket 0 00:05:13.446 EAL: Detected lcore 4 as core 0 on socket 0 00:05:13.446 EAL: Detected lcore 5 as core 0 on socket 0 00:05:13.446 EAL: Detected lcore 6 as core 0 on socket 0 00:05:13.446 EAL: Detected lcore 7 as core 0 on socket 0 00:05:13.446 EAL: Detected lcore 8 as core 0 on socket 0 00:05:13.446 EAL: Detected lcore 9 as core 0 on socket 0 00:05:13.447 EAL: Maximum logical cores by configuration: 128 00:05:13.447 EAL: Detected CPU lcores: 10 00:05:13.447 EAL: Detected NUMA nodes: 1 00:05:13.447 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:13.447 EAL: Detected shared linkage of DPDK 00:05:13.447 EAL: No shared files mode enabled, IPC will be disabled 00:05:13.447 EAL: Selected IOVA mode 'PA' 00:05:13.447 EAL: Probing VFIO support... 00:05:13.447 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:13.447 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:13.447 EAL: Ask a virtual area of 0x2e000 bytes 00:05:13.447 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:13.447 EAL: Setting up physically contiguous memory... 
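The EAL lines above record that neither vfio nor vfio_pci is loaded in this guest, so VFIO support is skipped and the devices stay on uio_pci_generic with IOVA mode 'PA'. A quick pre-flight check for the same condition, as a sketch:

  if [[ -e /sys/module/vfio && -e /sys/module/vfio_pci ]]; then
    echo "VFIO available: devices can be bound to vfio-pci"
  else
    echo "VFIO missing: EAL skips VFIO and devices stay on uio_pci_generic"   # what this run shows
  fi
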
00:05:13.447 EAL: Setting maximum number of open files to 524288 00:05:13.447 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:13.447 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:13.447 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.447 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:13.447 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.447 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.447 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:13.447 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:13.447 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.447 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:13.447 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.447 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.447 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:13.447 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:13.447 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.447 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:13.447 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.447 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.447 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:13.447 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:13.447 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.447 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:13.447 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.447 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.447 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:13.447 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:13.447 EAL: Hugepages will be freed exactly as allocated. 00:05:13.447 EAL: No shared files mode enabled, IPC is disabled 00:05:13.447 EAL: No shared files mode enabled, IPC is disabled 00:05:13.706 EAL: TSC frequency is ~2200000 KHz 00:05:13.706 EAL: Main lcore 0 is ready (tid=7f3418611a40;cpuset=[0]) 00:05:13.706 EAL: Trying to obtain current memory policy. 00:05:13.706 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.706 EAL: Restoring previous memory policy: 0 00:05:13.706 EAL: request: mp_malloc_sync 00:05:13.706 EAL: No shared files mode enabled, IPC is disabled 00:05:13.706 EAL: Heap on socket 0 was expanded by 2MB 00:05:13.706 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:13.706 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:13.706 EAL: Mem event callback 'spdk:(nil)' registered 00:05:13.706 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:13.706 00:05:13.706 00:05:13.706 CUnit - A unit testing framework for C - Version 2.1-3 00:05:13.706 http://cunit.sourceforge.net/ 00:05:13.706 00:05:13.706 00:05:13.706 Suite: components_suite 00:05:13.966 Test: vtophys_malloc_test ...passed 00:05:13.966 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
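For scale: each of the four segment lists announced above reserves 0x61000 bytes of metadata plus 0x400000000 bytes (16 GiB) of virtual address space, which is exactly n_segs 8192 x the 2 MiB (0x800 kB) hugepage size, so roughly 64 GiB of VA is set aside up front. That is reservation only; physical hugepages are mapped in on demand, which is what the "Heap on socket 0 was expanded by ..." callbacks below record.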
00:05:13.966 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.966 EAL: Restoring previous memory policy: 4 00:05:13.966 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.966 EAL: request: mp_malloc_sync 00:05:13.966 EAL: No shared files mode enabled, IPC is disabled 00:05:13.966 EAL: Heap on socket 0 was expanded by 4MB 00:05:13.966 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.966 EAL: request: mp_malloc_sync 00:05:13.966 EAL: No shared files mode enabled, IPC is disabled 00:05:13.966 EAL: Heap on socket 0 was shrunk by 4MB 00:05:13.966 EAL: Trying to obtain current memory policy. 00:05:13.966 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.966 EAL: Restoring previous memory policy: 4 00:05:13.966 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.966 EAL: request: mp_malloc_sync 00:05:13.966 EAL: No shared files mode enabled, IPC is disabled 00:05:13.966 EAL: Heap on socket 0 was expanded by 6MB 00:05:13.966 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.966 EAL: request: mp_malloc_sync 00:05:13.966 EAL: No shared files mode enabled, IPC is disabled 00:05:13.966 EAL: Heap on socket 0 was shrunk by 6MB 00:05:13.966 EAL: Trying to obtain current memory policy. 00:05:13.966 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.966 EAL: Restoring previous memory policy: 4 00:05:13.966 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.966 EAL: request: mp_malloc_sync 00:05:13.966 EAL: No shared files mode enabled, IPC is disabled 00:05:13.966 EAL: Heap on socket 0 was expanded by 10MB 00:05:13.966 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.966 EAL: request: mp_malloc_sync 00:05:13.966 EAL: No shared files mode enabled, IPC is disabled 00:05:13.966 EAL: Heap on socket 0 was shrunk by 10MB 00:05:13.966 EAL: Trying to obtain current memory policy. 00:05:13.966 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.966 EAL: Restoring previous memory policy: 4 00:05:13.966 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.966 EAL: request: mp_malloc_sync 00:05:13.966 EAL: No shared files mode enabled, IPC is disabled 00:05:13.966 EAL: Heap on socket 0 was expanded by 18MB 00:05:13.966 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.966 EAL: request: mp_malloc_sync 00:05:13.966 EAL: No shared files mode enabled, IPC is disabled 00:05:13.966 EAL: Heap on socket 0 was shrunk by 18MB 00:05:14.226 EAL: Trying to obtain current memory policy. 00:05:14.226 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.226 EAL: Restoring previous memory policy: 4 00:05:14.226 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.226 EAL: request: mp_malloc_sync 00:05:14.226 EAL: No shared files mode enabled, IPC is disabled 00:05:14.226 EAL: Heap on socket 0 was expanded by 34MB 00:05:14.226 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.226 EAL: request: mp_malloc_sync 00:05:14.226 EAL: No shared files mode enabled, IPC is disabled 00:05:14.226 EAL: Heap on socket 0 was shrunk by 34MB 00:05:14.226 EAL: Trying to obtain current memory policy. 
00:05:14.226 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.226 EAL: Restoring previous memory policy: 4 00:05:14.226 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.226 EAL: request: mp_malloc_sync 00:05:14.226 EAL: No shared files mode enabled, IPC is disabled 00:05:14.226 EAL: Heap on socket 0 was expanded by 66MB 00:05:14.226 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.226 EAL: request: mp_malloc_sync 00:05:14.226 EAL: No shared files mode enabled, IPC is disabled 00:05:14.226 EAL: Heap on socket 0 was shrunk by 66MB 00:05:14.485 EAL: Trying to obtain current memory policy. 00:05:14.485 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.485 EAL: Restoring previous memory policy: 4 00:05:14.485 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.485 EAL: request: mp_malloc_sync 00:05:14.485 EAL: No shared files mode enabled, IPC is disabled 00:05:14.485 EAL: Heap on socket 0 was expanded by 130MB 00:05:14.485 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.485 EAL: request: mp_malloc_sync 00:05:14.485 EAL: No shared files mode enabled, IPC is disabled 00:05:14.485 EAL: Heap on socket 0 was shrunk by 130MB 00:05:14.744 EAL: Trying to obtain current memory policy. 00:05:14.744 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.744 EAL: Restoring previous memory policy: 4 00:05:14.744 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.745 EAL: request: mp_malloc_sync 00:05:14.745 EAL: No shared files mode enabled, IPC is disabled 00:05:14.745 EAL: Heap on socket 0 was expanded by 258MB 00:05:15.004 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.004 EAL: request: mp_malloc_sync 00:05:15.004 EAL: No shared files mode enabled, IPC is disabled 00:05:15.004 EAL: Heap on socket 0 was shrunk by 258MB 00:05:15.263 EAL: Trying to obtain current memory policy. 00:05:15.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.522 EAL: Restoring previous memory policy: 4 00:05:15.522 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.522 EAL: request: mp_malloc_sync 00:05:15.522 EAL: No shared files mode enabled, IPC is disabled 00:05:15.522 EAL: Heap on socket 0 was expanded by 514MB 00:05:16.088 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.088 EAL: request: mp_malloc_sync 00:05:16.088 EAL: No shared files mode enabled, IPC is disabled 00:05:16.088 EAL: Heap on socket 0 was shrunk by 514MB 00:05:16.655 EAL: Trying to obtain current memory policy. 
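The expansion sizes in this suite follow a simple progression: 4, 6, 10, 18, 34, 66, 130, 258, 514 MB so far, with 1026 MB below, i.e. 2^k + 2 MB per round. That is consistent with each test allocation doubling in size while carrying a constant ~2 MB of allocator overhead, and every round is matched by a "shrunk by" callback when the buffer is freed.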
00:05:16.655 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.914 EAL: Restoring previous memory policy: 4 00:05:16.914 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.914 EAL: request: mp_malloc_sync 00:05:16.914 EAL: No shared files mode enabled, IPC is disabled 00:05:16.914 EAL: Heap on socket 0 was expanded by 1026MB 00:05:18.302 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.302 EAL: request: mp_malloc_sync 00:05:18.302 EAL: No shared files mode enabled, IPC is disabled 00:05:18.302 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:19.265 passed 00:05:19.265 00:05:19.265 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.265 suites 1 1 n/a 0 0 00:05:19.265 tests 2 2 2 0 0 00:05:19.265 asserts 5754 5754 5754 0 n/a 00:05:19.265 00:05:19.265 Elapsed time = 5.690 seconds 00:05:19.265 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.265 EAL: request: mp_malloc_sync 00:05:19.265 EAL: No shared files mode enabled, IPC is disabled 00:05:19.265 EAL: Heap on socket 0 was shrunk by 2MB 00:05:19.265 EAL: No shared files mode enabled, IPC is disabled 00:05:19.265 EAL: No shared files mode enabled, IPC is disabled 00:05:19.265 EAL: No shared files mode enabled, IPC is disabled 00:05:19.523 00:05:19.523 real 0m6.014s 00:05:19.523 user 0m5.192s 00:05:19.523 sys 0m0.673s 00:05:19.523 09:07:13 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.523 09:07:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:19.524 ************************************ 00:05:19.524 END TEST env_vtophys 00:05:19.524 ************************************ 00:05:19.524 09:07:13 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:19.524 09:07:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.524 09:07:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.524 09:07:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:19.524 ************************************ 00:05:19.524 START TEST env_pci 00:05:19.524 ************************************ 00:05:19.524 09:07:13 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:19.524 00:05:19.524 00:05:19.524 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.524 http://cunit.sourceforge.net/ 00:05:19.524 00:05:19.524 00:05:19.524 Suite: pci 00:05:19.524 Test: pci_hook ...[2024-12-13 09:07:13.262450] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 59144 has claimed it 00:05:19.524 passed 00:05:19.524 00:05:19.524 EAL: Cannot find device (10000:00:01.0) 00:05:19.524 EAL: Failed to attach device on primary process 00:05:19.524 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.524 suites 1 1 n/a 0 0 00:05:19.524 tests 1 1 1 0 0 00:05:19.524 asserts 25 25 25 0 n/a 00:05:19.524 00:05:19.524 Elapsed time = 0.008 seconds 00:05:19.524 00:05:19.524 real 0m0.082s 00:05:19.524 user 0m0.044s 00:05:19.524 sys 0m0.037s 00:05:19.524 09:07:13 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.524 09:07:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:19.524 ************************************ 00:05:19.524 END TEST env_pci 00:05:19.524 ************************************ 00:05:19.524 09:07:13 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:19.524 09:07:13 env -- env/env.sh@15 -- # uname 00:05:19.524 09:07:13 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:19.524 09:07:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:19.524 09:07:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:19.524 09:07:13 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:19.524 09:07:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.524 09:07:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:19.524 ************************************ 00:05:19.524 START TEST env_dpdk_post_init 00:05:19.524 ************************************ 00:05:19.524 09:07:13 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:19.783 EAL: Detected CPU lcores: 10 00:05:19.783 EAL: Detected NUMA nodes: 1 00:05:19.783 EAL: Detected shared linkage of DPDK 00:05:19.783 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:19.783 EAL: Selected IOVA mode 'PA' 00:05:19.783 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:19.783 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:19.783 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:19.783 Starting DPDK initialization... 00:05:19.783 Starting SPDK post initialization... 00:05:19.783 SPDK NVMe probe 00:05:19.783 Attaching to 0000:00:10.0 00:05:19.783 Attaching to 0000:00:11.0 00:05:19.783 Attached to 0000:00:10.0 00:05:19.783 Attached to 0000:00:11.0 00:05:19.783 Cleaning up... 00:05:19.783 00:05:19.783 real 0m0.252s 00:05:19.783 user 0m0.081s 00:05:19.783 sys 0m0.071s 00:05:19.783 09:07:13 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.783 09:07:13 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:19.783 ************************************ 00:05:19.783 END TEST env_dpdk_post_init 00:05:19.783 ************************************ 00:05:19.783 09:07:13 env -- env/env.sh@26 -- # uname 00:05:19.783 09:07:13 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:19.783 09:07:13 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:19.783 09:07:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.783 09:07:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.783 09:07:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:20.042 ************************************ 00:05:20.042 START TEST env_mem_callbacks 00:05:20.042 ************************************ 00:05:20.042 09:07:13 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:20.042 EAL: Detected CPU lcores: 10 00:05:20.042 EAL: Detected NUMA nodes: 1 00:05:20.042 EAL: Detected shared linkage of DPDK 00:05:20.042 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:20.042 EAL: Selected IOVA mode 'PA' 00:05:20.042 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:20.042 00:05:20.042 00:05:20.042 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.042 http://cunit.sourceforge.net/ 00:05:20.042 00:05:20.042 00:05:20.042 Suite: memory 00:05:20.042 Test: test ... 
00:05:20.042 register 0x200000200000 2097152 00:05:20.042 malloc 3145728 00:05:20.042 register 0x200000400000 4194304 00:05:20.042 buf 0x2000004fffc0 len 3145728 PASSED 00:05:20.042 malloc 64 00:05:20.042 buf 0x2000004ffec0 len 64 PASSED 00:05:20.042 malloc 4194304 00:05:20.042 register 0x200000800000 6291456 00:05:20.042 buf 0x2000009fffc0 len 4194304 PASSED 00:05:20.042 free 0x2000004fffc0 3145728 00:05:20.042 free 0x2000004ffec0 64 00:05:20.042 unregister 0x200000400000 4194304 PASSED 00:05:20.042 free 0x2000009fffc0 4194304 00:05:20.042 unregister 0x200000800000 6291456 PASSED 00:05:20.042 malloc 8388608 00:05:20.042 register 0x200000400000 10485760 00:05:20.042 buf 0x2000005fffc0 len 8388608 PASSED 00:05:20.042 free 0x2000005fffc0 8388608 00:05:20.042 unregister 0x200000400000 10485760 PASSED 00:05:20.042 passed 00:05:20.042 00:05:20.042 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.042 suites 1 1 n/a 0 0 00:05:20.042 tests 1 1 1 0 0 00:05:20.042 asserts 15 15 15 0 n/a 00:05:20.042 00:05:20.042 Elapsed time = 0.056 seconds 00:05:20.042 00:05:20.042 real 0m0.232s 00:05:20.042 user 0m0.080s 00:05:20.042 sys 0m0.050s 00:05:20.042 09:07:13 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.042 09:07:13 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:20.042 ************************************ 00:05:20.042 END TEST env_mem_callbacks 00:05:20.042 ************************************ 00:05:20.300 00:05:20.300 real 0m7.441s 00:05:20.300 user 0m5.954s 00:05:20.300 sys 0m1.109s 00:05:20.300 09:07:13 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.300 09:07:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:20.300 ************************************ 00:05:20.300 END TEST env 00:05:20.300 ************************************ 00:05:20.300 09:07:13 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:20.300 09:07:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.300 09:07:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.300 09:07:13 -- common/autotest_common.sh@10 -- # set +x 00:05:20.300 ************************************ 00:05:20.300 START TEST rpc 00:05:20.300 ************************************ 00:05:20.300 09:07:14 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:20.300 * Looking for test storage... 
00:05:20.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:20.300 09:07:14 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:20.300 09:07:14 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:20.300 09:07:14 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:20.300 09:07:14 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:20.300 09:07:14 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.300 09:07:14 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.300 09:07:14 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.300 09:07:14 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.300 09:07:14 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.300 09:07:14 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.300 09:07:14 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.300 09:07:14 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.300 09:07:14 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.300 09:07:14 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.300 09:07:14 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.300 09:07:14 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:20.300 09:07:14 rpc -- scripts/common.sh@345 -- # : 1 00:05:20.300 09:07:14 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.300 09:07:14 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.300 09:07:14 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:20.300 09:07:14 rpc -- scripts/common.sh@353 -- # local d=1 00:05:20.300 09:07:14 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.300 09:07:14 rpc -- scripts/common.sh@355 -- # echo 1 00:05:20.300 09:07:14 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.300 09:07:14 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:20.300 09:07:14 rpc -- scripts/common.sh@353 -- # local d=2 00:05:20.300 09:07:14 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.300 09:07:14 rpc -- scripts/common.sh@355 -- # echo 2 00:05:20.558 09:07:14 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.558 09:07:14 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.558 09:07:14 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.558 09:07:14 rpc -- scripts/common.sh@368 -- # return 0 00:05:20.558 09:07:14 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.558 09:07:14 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:20.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.558 --rc genhtml_branch_coverage=1 00:05:20.558 --rc genhtml_function_coverage=1 00:05:20.558 --rc genhtml_legend=1 00:05:20.558 --rc geninfo_all_blocks=1 00:05:20.558 --rc geninfo_unexecuted_blocks=1 00:05:20.558 00:05:20.558 ' 00:05:20.558 09:07:14 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:20.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.558 --rc genhtml_branch_coverage=1 00:05:20.558 --rc genhtml_function_coverage=1 00:05:20.558 --rc genhtml_legend=1 00:05:20.558 --rc geninfo_all_blocks=1 00:05:20.558 --rc geninfo_unexecuted_blocks=1 00:05:20.558 00:05:20.558 ' 00:05:20.558 09:07:14 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:20.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.558 --rc genhtml_branch_coverage=1 00:05:20.558 --rc genhtml_function_coverage=1 00:05:20.558 --rc 
genhtml_legend=1 00:05:20.558 --rc geninfo_all_blocks=1 00:05:20.558 --rc geninfo_unexecuted_blocks=1 00:05:20.558 00:05:20.558 ' 00:05:20.558 09:07:14 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:20.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.558 --rc genhtml_branch_coverage=1 00:05:20.558 --rc genhtml_function_coverage=1 00:05:20.558 --rc genhtml_legend=1 00:05:20.558 --rc geninfo_all_blocks=1 00:05:20.558 --rc geninfo_unexecuted_blocks=1 00:05:20.558 00:05:20.558 ' 00:05:20.558 09:07:14 rpc -- rpc/rpc.sh@65 -- # spdk_pid=59271 00:05:20.558 09:07:14 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:20.558 09:07:14 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.558 09:07:14 rpc -- rpc/rpc.sh@67 -- # waitforlisten 59271 00:05:20.558 09:07:14 rpc -- common/autotest_common.sh@835 -- # '[' -z 59271 ']' 00:05:20.558 09:07:14 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.558 09:07:14 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.558 09:07:14 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.558 09:07:14 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.558 09:07:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.558 [2024-12-13 09:07:14.334348] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:20.558 [2024-12-13 09:07:14.334993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59271 ] 00:05:20.817 [2024-12-13 09:07:14.518194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.817 [2024-12-13 09:07:14.597657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:20.817 [2024-12-13 09:07:14.597752] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 59271' to capture a snapshot of events at runtime. 00:05:20.817 [2024-12-13 09:07:14.597768] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:20.817 [2024-12-13 09:07:14.597781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:20.817 [2024-12-13 09:07:14.597791] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid59271 for offline analysis/debug. 
00:05:20.817 [2024-12-13 09:07:14.598923] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.075 [2024-12-13 09:07:14.787940] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:21.640 09:07:15 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.640 09:07:15 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:21.640 09:07:15 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:21.640 09:07:15 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:21.640 09:07:15 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:21.640 09:07:15 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:21.640 09:07:15 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.640 09:07:15 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.640 09:07:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.640 ************************************ 00:05:21.640 START TEST rpc_integrity 00:05:21.640 ************************************ 00:05:21.640 09:07:15 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:21.640 09:07:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:21.640 09:07:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.640 09:07:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.640 09:07:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.640 09:07:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:21.640 09:07:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:21.640 09:07:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:21.640 09:07:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:21.640 09:07:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.640 09:07:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.640 09:07:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.640 09:07:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:21.640 09:07:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:21.640 09:07:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.640 09:07:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.640 09:07:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.640 09:07:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:21.640 { 00:05:21.640 "name": "Malloc0", 00:05:21.640 "aliases": [ 00:05:21.640 "0717b664-e42e-4863-887d-9f4b36b7ca78" 00:05:21.640 ], 00:05:21.640 "product_name": "Malloc disk", 00:05:21.640 "block_size": 512, 00:05:21.640 "num_blocks": 16384, 00:05:21.640 "uuid": "0717b664-e42e-4863-887d-9f4b36b7ca78", 00:05:21.640 "assigned_rate_limits": { 00:05:21.640 "rw_ios_per_sec": 0, 00:05:21.640 "rw_mbytes_per_sec": 0, 00:05:21.640 "r_mbytes_per_sec": 0, 00:05:21.640 "w_mbytes_per_sec": 0 00:05:21.640 }, 00:05:21.640 "claimed": false, 00:05:21.640 "zoned": false, 00:05:21.640 
"supported_io_types": { 00:05:21.640 "read": true, 00:05:21.640 "write": true, 00:05:21.640 "unmap": true, 00:05:21.640 "flush": true, 00:05:21.640 "reset": true, 00:05:21.640 "nvme_admin": false, 00:05:21.640 "nvme_io": false, 00:05:21.640 "nvme_io_md": false, 00:05:21.640 "write_zeroes": true, 00:05:21.640 "zcopy": true, 00:05:21.640 "get_zone_info": false, 00:05:21.640 "zone_management": false, 00:05:21.640 "zone_append": false, 00:05:21.640 "compare": false, 00:05:21.640 "compare_and_write": false, 00:05:21.640 "abort": true, 00:05:21.640 "seek_hole": false, 00:05:21.640 "seek_data": false, 00:05:21.640 "copy": true, 00:05:21.640 "nvme_iov_md": false 00:05:21.640 }, 00:05:21.640 "memory_domains": [ 00:05:21.640 { 00:05:21.640 "dma_device_id": "system", 00:05:21.640 "dma_device_type": 1 00:05:21.640 }, 00:05:21.640 { 00:05:21.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.640 "dma_device_type": 2 00:05:21.640 } 00:05:21.640 ], 00:05:21.640 "driver_specific": {} 00:05:21.640 } 00:05:21.640 ]' 00:05:21.640 09:07:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:21.640 09:07:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:21.640 09:07:15 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:21.640 09:07:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.640 09:07:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.640 [2024-12-13 09:07:15.453905] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:21.640 [2024-12-13 09:07:15.453994] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:21.640 [2024-12-13 09:07:15.454041] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:05:21.640 [2024-12-13 09:07:15.454061] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:21.640 [2024-12-13 09:07:15.456707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:21.641 [2024-12-13 09:07:15.456770] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:21.641 Passthru0 00:05:21.641 09:07:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.641 09:07:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:21.641 09:07:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.641 09:07:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.641 09:07:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.641 09:07:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:21.641 { 00:05:21.641 "name": "Malloc0", 00:05:21.641 "aliases": [ 00:05:21.641 "0717b664-e42e-4863-887d-9f4b36b7ca78" 00:05:21.641 ], 00:05:21.641 "product_name": "Malloc disk", 00:05:21.641 "block_size": 512, 00:05:21.641 "num_blocks": 16384, 00:05:21.641 "uuid": "0717b664-e42e-4863-887d-9f4b36b7ca78", 00:05:21.641 "assigned_rate_limits": { 00:05:21.641 "rw_ios_per_sec": 0, 00:05:21.641 "rw_mbytes_per_sec": 0, 00:05:21.641 "r_mbytes_per_sec": 0, 00:05:21.641 "w_mbytes_per_sec": 0 00:05:21.641 }, 00:05:21.641 "claimed": true, 00:05:21.641 "claim_type": "exclusive_write", 00:05:21.641 "zoned": false, 00:05:21.641 "supported_io_types": { 00:05:21.641 "read": true, 00:05:21.641 "write": true, 00:05:21.641 "unmap": true, 00:05:21.641 "flush": true, 00:05:21.641 "reset": true, 00:05:21.641 "nvme_admin": false, 
00:05:21.641 "nvme_io": false, 00:05:21.641 "nvme_io_md": false, 00:05:21.641 "write_zeroes": true, 00:05:21.641 "zcopy": true, 00:05:21.641 "get_zone_info": false, 00:05:21.641 "zone_management": false, 00:05:21.641 "zone_append": false, 00:05:21.641 "compare": false, 00:05:21.641 "compare_and_write": false, 00:05:21.641 "abort": true, 00:05:21.641 "seek_hole": false, 00:05:21.641 "seek_data": false, 00:05:21.641 "copy": true, 00:05:21.641 "nvme_iov_md": false 00:05:21.641 }, 00:05:21.641 "memory_domains": [ 00:05:21.641 { 00:05:21.641 "dma_device_id": "system", 00:05:21.641 "dma_device_type": 1 00:05:21.641 }, 00:05:21.641 { 00:05:21.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.641 "dma_device_type": 2 00:05:21.641 } 00:05:21.641 ], 00:05:21.641 "driver_specific": {} 00:05:21.641 }, 00:05:21.641 { 00:05:21.641 "name": "Passthru0", 00:05:21.641 "aliases": [ 00:05:21.641 "73ca6366-adb5-55f3-b688-c934c93776b2" 00:05:21.641 ], 00:05:21.641 "product_name": "passthru", 00:05:21.641 "block_size": 512, 00:05:21.641 "num_blocks": 16384, 00:05:21.641 "uuid": "73ca6366-adb5-55f3-b688-c934c93776b2", 00:05:21.641 "assigned_rate_limits": { 00:05:21.641 "rw_ios_per_sec": 0, 00:05:21.641 "rw_mbytes_per_sec": 0, 00:05:21.641 "r_mbytes_per_sec": 0, 00:05:21.641 "w_mbytes_per_sec": 0 00:05:21.641 }, 00:05:21.641 "claimed": false, 00:05:21.641 "zoned": false, 00:05:21.641 "supported_io_types": { 00:05:21.641 "read": true, 00:05:21.641 "write": true, 00:05:21.641 "unmap": true, 00:05:21.641 "flush": true, 00:05:21.641 "reset": true, 00:05:21.641 "nvme_admin": false, 00:05:21.641 "nvme_io": false, 00:05:21.641 "nvme_io_md": false, 00:05:21.641 "write_zeroes": true, 00:05:21.641 "zcopy": true, 00:05:21.641 "get_zone_info": false, 00:05:21.641 "zone_management": false, 00:05:21.641 "zone_append": false, 00:05:21.641 "compare": false, 00:05:21.641 "compare_and_write": false, 00:05:21.641 "abort": true, 00:05:21.641 "seek_hole": false, 00:05:21.641 "seek_data": false, 00:05:21.641 "copy": true, 00:05:21.641 "nvme_iov_md": false 00:05:21.641 }, 00:05:21.641 "memory_domains": [ 00:05:21.641 { 00:05:21.641 "dma_device_id": "system", 00:05:21.641 "dma_device_type": 1 00:05:21.641 }, 00:05:21.641 { 00:05:21.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.641 "dma_device_type": 2 00:05:21.641 } 00:05:21.641 ], 00:05:21.641 "driver_specific": { 00:05:21.641 "passthru": { 00:05:21.641 "name": "Passthru0", 00:05:21.641 "base_bdev_name": "Malloc0" 00:05:21.641 } 00:05:21.641 } 00:05:21.641 } 00:05:21.641 ]' 00:05:21.641 09:07:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:21.899 09:07:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:21.899 09:07:15 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:21.899 09:07:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.899 09:07:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.899 09:07:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.899 09:07:15 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:21.899 09:07:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.899 09:07:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.899 09:07:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.899 09:07:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:21.899 09:07:15 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.899 09:07:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.899 09:07:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.899 09:07:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:21.899 09:07:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:21.899 09:07:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:21.899 00:05:21.899 real 0m0.343s 00:05:21.899 user 0m0.214s 00:05:21.899 sys 0m0.042s 00:05:21.899 09:07:15 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.899 09:07:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.899 ************************************ 00:05:21.899 END TEST rpc_integrity 00:05:21.899 ************************************ 00:05:21.899 09:07:15 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:21.899 09:07:15 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.899 09:07:15 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.899 09:07:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.899 ************************************ 00:05:21.899 START TEST rpc_plugins 00:05:21.899 ************************************ 00:05:21.899 09:07:15 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:21.899 09:07:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:21.899 09:07:15 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.899 09:07:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.899 09:07:15 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.899 09:07:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:21.899 09:07:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:21.899 09:07:15 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.899 09:07:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.899 09:07:15 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.899 09:07:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:21.899 { 00:05:21.899 "name": "Malloc1", 00:05:21.899 "aliases": [ 00:05:21.899 "b97e2e96-1ac8-48a8-bba1-3fa89caec9fd" 00:05:21.899 ], 00:05:21.899 "product_name": "Malloc disk", 00:05:21.899 "block_size": 4096, 00:05:21.899 "num_blocks": 256, 00:05:21.899 "uuid": "b97e2e96-1ac8-48a8-bba1-3fa89caec9fd", 00:05:21.899 "assigned_rate_limits": { 00:05:21.899 "rw_ios_per_sec": 0, 00:05:21.899 "rw_mbytes_per_sec": 0, 00:05:21.899 "r_mbytes_per_sec": 0, 00:05:21.899 "w_mbytes_per_sec": 0 00:05:21.899 }, 00:05:21.899 "claimed": false, 00:05:21.899 "zoned": false, 00:05:21.899 "supported_io_types": { 00:05:21.899 "read": true, 00:05:21.899 "write": true, 00:05:21.899 "unmap": true, 00:05:21.899 "flush": true, 00:05:21.899 "reset": true, 00:05:21.899 "nvme_admin": false, 00:05:21.899 "nvme_io": false, 00:05:21.899 "nvme_io_md": false, 00:05:21.899 "write_zeroes": true, 00:05:21.899 "zcopy": true, 00:05:21.899 "get_zone_info": false, 00:05:21.899 "zone_management": false, 00:05:21.899 "zone_append": false, 00:05:21.899 "compare": false, 00:05:21.899 "compare_and_write": false, 00:05:21.899 "abort": true, 00:05:21.899 "seek_hole": false, 00:05:21.899 "seek_data": false, 00:05:21.899 "copy": true, 00:05:21.899 "nvme_iov_md": false 00:05:21.899 }, 00:05:21.899 "memory_domains": [ 00:05:21.899 { 
00:05:21.899 "dma_device_id": "system", 00:05:21.899 "dma_device_type": 1 00:05:21.899 }, 00:05:21.899 { 00:05:21.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.899 "dma_device_type": 2 00:05:21.899 } 00:05:21.899 ], 00:05:21.899 "driver_specific": {} 00:05:21.899 } 00:05:21.899 ]' 00:05:21.899 09:07:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:21.899 09:07:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:21.899 09:07:15 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:21.899 09:07:15 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.899 09:07:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.899 09:07:15 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.899 09:07:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:21.899 09:07:15 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.899 09:07:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:22.157 09:07:15 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.157 09:07:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:22.157 09:07:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:22.157 09:07:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:22.157 00:05:22.157 real 0m0.159s 00:05:22.157 user 0m0.100s 00:05:22.157 sys 0m0.019s 00:05:22.157 09:07:15 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.157 ************************************ 00:05:22.157 END TEST rpc_plugins 00:05:22.157 09:07:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:22.157 ************************************ 00:05:22.157 09:07:15 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:22.157 09:07:15 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.157 09:07:15 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.157 09:07:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.157 ************************************ 00:05:22.157 START TEST rpc_trace_cmd_test 00:05:22.157 ************************************ 00:05:22.157 09:07:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:22.157 09:07:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:22.157 09:07:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:22.157 09:07:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.157 09:07:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:22.157 09:07:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.157 09:07:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:22.157 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid59271", 00:05:22.157 "tpoint_group_mask": "0x8", 00:05:22.157 "iscsi_conn": { 00:05:22.157 "mask": "0x2", 00:05:22.157 "tpoint_mask": "0x0" 00:05:22.158 }, 00:05:22.158 "scsi": { 00:05:22.158 "mask": "0x4", 00:05:22.158 "tpoint_mask": "0x0" 00:05:22.158 }, 00:05:22.158 "bdev": { 00:05:22.158 "mask": "0x8", 00:05:22.158 "tpoint_mask": "0xffffffffffffffff" 00:05:22.158 }, 00:05:22.158 "nvmf_rdma": { 00:05:22.158 "mask": "0x10", 00:05:22.158 "tpoint_mask": "0x0" 00:05:22.158 }, 00:05:22.158 "nvmf_tcp": { 00:05:22.158 "mask": "0x20", 00:05:22.158 "tpoint_mask": "0x0" 00:05:22.158 }, 00:05:22.158 "ftl": { 00:05:22.158 
"mask": "0x40", 00:05:22.158 "tpoint_mask": "0x0" 00:05:22.158 }, 00:05:22.158 "blobfs": { 00:05:22.158 "mask": "0x80", 00:05:22.158 "tpoint_mask": "0x0" 00:05:22.158 }, 00:05:22.158 "dsa": { 00:05:22.158 "mask": "0x200", 00:05:22.158 "tpoint_mask": "0x0" 00:05:22.158 }, 00:05:22.158 "thread": { 00:05:22.158 "mask": "0x400", 00:05:22.158 "tpoint_mask": "0x0" 00:05:22.158 }, 00:05:22.158 "nvme_pcie": { 00:05:22.158 "mask": "0x800", 00:05:22.158 "tpoint_mask": "0x0" 00:05:22.158 }, 00:05:22.158 "iaa": { 00:05:22.158 "mask": "0x1000", 00:05:22.158 "tpoint_mask": "0x0" 00:05:22.158 }, 00:05:22.158 "nvme_tcp": { 00:05:22.158 "mask": "0x2000", 00:05:22.158 "tpoint_mask": "0x0" 00:05:22.158 }, 00:05:22.158 "bdev_nvme": { 00:05:22.158 "mask": "0x4000", 00:05:22.158 "tpoint_mask": "0x0" 00:05:22.158 }, 00:05:22.158 "sock": { 00:05:22.158 "mask": "0x8000", 00:05:22.158 "tpoint_mask": "0x0" 00:05:22.158 }, 00:05:22.158 "blob": { 00:05:22.158 "mask": "0x10000", 00:05:22.158 "tpoint_mask": "0x0" 00:05:22.158 }, 00:05:22.158 "bdev_raid": { 00:05:22.158 "mask": "0x20000", 00:05:22.158 "tpoint_mask": "0x0" 00:05:22.158 }, 00:05:22.158 "scheduler": { 00:05:22.158 "mask": "0x40000", 00:05:22.158 "tpoint_mask": "0x0" 00:05:22.158 } 00:05:22.158 }' 00:05:22.158 09:07:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:22.158 09:07:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:22.158 09:07:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:22.158 09:07:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:22.158 09:07:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:22.416 09:07:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:22.416 09:07:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:22.416 09:07:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:22.416 09:07:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:22.416 09:07:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:22.416 00:05:22.416 real 0m0.292s 00:05:22.416 user 0m0.251s 00:05:22.416 sys 0m0.028s 00:05:22.416 09:07:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.416 09:07:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:22.416 ************************************ 00:05:22.416 END TEST rpc_trace_cmd_test 00:05:22.416 ************************************ 00:05:22.416 09:07:16 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:22.416 09:07:16 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:22.416 09:07:16 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:22.416 09:07:16 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.416 09:07:16 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.416 09:07:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.416 ************************************ 00:05:22.416 START TEST rpc_daemon_integrity 00:05:22.416 ************************************ 00:05:22.416 09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:22.416 09:07:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:22.416 09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.416 09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.416 
09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.416 09:07:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:22.416 09:07:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:22.416 09:07:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:22.416 09:07:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:22.416 09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.416 09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.675 09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.675 09:07:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:22.675 09:07:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:22.675 09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.675 09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.675 09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.675 09:07:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:22.675 { 00:05:22.675 "name": "Malloc2", 00:05:22.675 "aliases": [ 00:05:22.675 "b4cb7e12-3e90-48a5-9994-32dea3e76ff0" 00:05:22.675 ], 00:05:22.675 "product_name": "Malloc disk", 00:05:22.675 "block_size": 512, 00:05:22.675 "num_blocks": 16384, 00:05:22.675 "uuid": "b4cb7e12-3e90-48a5-9994-32dea3e76ff0", 00:05:22.675 "assigned_rate_limits": { 00:05:22.675 "rw_ios_per_sec": 0, 00:05:22.675 "rw_mbytes_per_sec": 0, 00:05:22.675 "r_mbytes_per_sec": 0, 00:05:22.675 "w_mbytes_per_sec": 0 00:05:22.675 }, 00:05:22.675 "claimed": false, 00:05:22.675 "zoned": false, 00:05:22.675 "supported_io_types": { 00:05:22.675 "read": true, 00:05:22.675 "write": true, 00:05:22.675 "unmap": true, 00:05:22.675 "flush": true, 00:05:22.675 "reset": true, 00:05:22.675 "nvme_admin": false, 00:05:22.675 "nvme_io": false, 00:05:22.675 "nvme_io_md": false, 00:05:22.675 "write_zeroes": true, 00:05:22.675 "zcopy": true, 00:05:22.675 "get_zone_info": false, 00:05:22.675 "zone_management": false, 00:05:22.675 "zone_append": false, 00:05:22.675 "compare": false, 00:05:22.675 "compare_and_write": false, 00:05:22.675 "abort": true, 00:05:22.675 "seek_hole": false, 00:05:22.675 "seek_data": false, 00:05:22.675 "copy": true, 00:05:22.675 "nvme_iov_md": false 00:05:22.675 }, 00:05:22.675 "memory_domains": [ 00:05:22.675 { 00:05:22.675 "dma_device_id": "system", 00:05:22.675 "dma_device_type": 1 00:05:22.675 }, 00:05:22.675 { 00:05:22.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.675 "dma_device_type": 2 00:05:22.675 } 00:05:22.675 ], 00:05:22.675 "driver_specific": {} 00:05:22.675 } 00:05:22.675 ]' 00:05:22.675 09:07:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:22.675 09:07:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:22.675 09:07:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:22.675 09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.675 09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.675 [2024-12-13 09:07:16.396603] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:22.675 [2024-12-13 09:07:16.396692] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:05:22.675 [2024-12-13 09:07:16.396721] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:05:22.675 [2024-12-13 09:07:16.396749] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:22.675 [2024-12-13 09:07:16.399338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:22.675 [2024-12-13 09:07:16.399395] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:22.675 Passthru0 00:05:22.675 09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.675 09:07:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:22.675 09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.675 09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.675 09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.675 09:07:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:22.675 { 00:05:22.675 "name": "Malloc2", 00:05:22.675 "aliases": [ 00:05:22.675 "b4cb7e12-3e90-48a5-9994-32dea3e76ff0" 00:05:22.675 ], 00:05:22.675 "product_name": "Malloc disk", 00:05:22.675 "block_size": 512, 00:05:22.675 "num_blocks": 16384, 00:05:22.675 "uuid": "b4cb7e12-3e90-48a5-9994-32dea3e76ff0", 00:05:22.675 "assigned_rate_limits": { 00:05:22.675 "rw_ios_per_sec": 0, 00:05:22.675 "rw_mbytes_per_sec": 0, 00:05:22.675 "r_mbytes_per_sec": 0, 00:05:22.675 "w_mbytes_per_sec": 0 00:05:22.675 }, 00:05:22.675 "claimed": true, 00:05:22.675 "claim_type": "exclusive_write", 00:05:22.675 "zoned": false, 00:05:22.675 "supported_io_types": { 00:05:22.675 "read": true, 00:05:22.675 "write": true, 00:05:22.675 "unmap": true, 00:05:22.675 "flush": true, 00:05:22.675 "reset": true, 00:05:22.675 "nvme_admin": false, 00:05:22.675 "nvme_io": false, 00:05:22.675 "nvme_io_md": false, 00:05:22.675 "write_zeroes": true, 00:05:22.675 "zcopy": true, 00:05:22.675 "get_zone_info": false, 00:05:22.675 "zone_management": false, 00:05:22.675 "zone_append": false, 00:05:22.675 "compare": false, 00:05:22.675 "compare_and_write": false, 00:05:22.675 "abort": true, 00:05:22.675 "seek_hole": false, 00:05:22.675 "seek_data": false, 00:05:22.675 "copy": true, 00:05:22.675 "nvme_iov_md": false 00:05:22.675 }, 00:05:22.675 "memory_domains": [ 00:05:22.675 { 00:05:22.675 "dma_device_id": "system", 00:05:22.675 "dma_device_type": 1 00:05:22.675 }, 00:05:22.675 { 00:05:22.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.675 "dma_device_type": 2 00:05:22.675 } 00:05:22.675 ], 00:05:22.675 "driver_specific": {} 00:05:22.675 }, 00:05:22.675 { 00:05:22.675 "name": "Passthru0", 00:05:22.675 "aliases": [ 00:05:22.675 "4d28d7d5-0a8e-5e16-ab46-0e0243b65a51" 00:05:22.675 ], 00:05:22.675 "product_name": "passthru", 00:05:22.675 "block_size": 512, 00:05:22.675 "num_blocks": 16384, 00:05:22.675 "uuid": "4d28d7d5-0a8e-5e16-ab46-0e0243b65a51", 00:05:22.675 "assigned_rate_limits": { 00:05:22.675 "rw_ios_per_sec": 0, 00:05:22.675 "rw_mbytes_per_sec": 0, 00:05:22.675 "r_mbytes_per_sec": 0, 00:05:22.675 "w_mbytes_per_sec": 0 00:05:22.675 }, 00:05:22.675 "claimed": false, 00:05:22.675 "zoned": false, 00:05:22.675 "supported_io_types": { 00:05:22.675 "read": true, 00:05:22.675 "write": true, 00:05:22.675 "unmap": true, 00:05:22.675 "flush": true, 00:05:22.675 "reset": true, 00:05:22.675 "nvme_admin": false, 00:05:22.675 "nvme_io": false, 00:05:22.675 
"nvme_io_md": false, 00:05:22.675 "write_zeroes": true, 00:05:22.675 "zcopy": true, 00:05:22.675 "get_zone_info": false, 00:05:22.675 "zone_management": false, 00:05:22.675 "zone_append": false, 00:05:22.675 "compare": false, 00:05:22.675 "compare_and_write": false, 00:05:22.675 "abort": true, 00:05:22.675 "seek_hole": false, 00:05:22.675 "seek_data": false, 00:05:22.675 "copy": true, 00:05:22.675 "nvme_iov_md": false 00:05:22.675 }, 00:05:22.675 "memory_domains": [ 00:05:22.675 { 00:05:22.675 "dma_device_id": "system", 00:05:22.675 "dma_device_type": 1 00:05:22.675 }, 00:05:22.675 { 00:05:22.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.675 "dma_device_type": 2 00:05:22.675 } 00:05:22.675 ], 00:05:22.675 "driver_specific": { 00:05:22.676 "passthru": { 00:05:22.676 "name": "Passthru0", 00:05:22.676 "base_bdev_name": "Malloc2" 00:05:22.676 } 00:05:22.676 } 00:05:22.676 } 00:05:22.676 ]' 00:05:22.676 09:07:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:22.676 09:07:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:22.676 09:07:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:22.676 09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.676 09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.676 09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.676 09:07:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:22.676 09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.676 09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.676 09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.676 09:07:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:22.676 09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.676 09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.676 09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.676 09:07:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:22.676 09:07:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:22.934 09:07:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:22.934 00:05:22.934 real 0m0.342s 00:05:22.934 user 0m0.223s 00:05:22.934 sys 0m0.035s 00:05:22.934 09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.934 ************************************ 00:05:22.934 END TEST rpc_daemon_integrity 00:05:22.934 09:07:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.934 ************************************ 00:05:22.934 09:07:16 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:22.934 09:07:16 rpc -- rpc/rpc.sh@84 -- # killprocess 59271 00:05:22.934 09:07:16 rpc -- common/autotest_common.sh@954 -- # '[' -z 59271 ']' 00:05:22.934 09:07:16 rpc -- common/autotest_common.sh@958 -- # kill -0 59271 00:05:22.934 09:07:16 rpc -- common/autotest_common.sh@959 -- # uname 00:05:22.934 09:07:16 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.934 09:07:16 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59271 00:05:22.934 killing process with pid 59271 00:05:22.934 09:07:16 rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.934 09:07:16 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.934 09:07:16 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59271' 00:05:22.934 09:07:16 rpc -- common/autotest_common.sh@973 -- # kill 59271 00:05:22.934 09:07:16 rpc -- common/autotest_common.sh@978 -- # wait 59271 00:05:24.842 00:05:24.842 real 0m4.393s 00:05:24.842 user 0m5.228s 00:05:24.842 sys 0m0.748s 00:05:24.842 09:07:18 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.842 09:07:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.842 ************************************ 00:05:24.842 END TEST rpc 00:05:24.842 ************************************ 00:05:24.842 09:07:18 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:24.842 09:07:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.842 09:07:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.842 09:07:18 -- common/autotest_common.sh@10 -- # set +x 00:05:24.842 ************************************ 00:05:24.842 START TEST skip_rpc 00:05:24.842 ************************************ 00:05:24.842 09:07:18 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:24.842 * Looking for test storage... 00:05:24.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:24.843 09:07:18 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.843 09:07:18 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:24.843 09:07:18 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.843 09:07:18 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.843 09:07:18 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:24.843 09:07:18 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.843 09:07:18 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.843 --rc genhtml_branch_coverage=1 00:05:24.843 --rc genhtml_function_coverage=1 00:05:24.843 --rc genhtml_legend=1 00:05:24.843 --rc geninfo_all_blocks=1 00:05:24.843 --rc geninfo_unexecuted_blocks=1 00:05:24.843 00:05:24.843 ' 00:05:24.843 09:07:18 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.843 --rc genhtml_branch_coverage=1 00:05:24.843 --rc genhtml_function_coverage=1 00:05:24.843 --rc genhtml_legend=1 00:05:24.843 --rc geninfo_all_blocks=1 00:05:24.843 --rc geninfo_unexecuted_blocks=1 00:05:24.843 00:05:24.843 ' 00:05:24.843 09:07:18 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:24.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.843 --rc genhtml_branch_coverage=1 00:05:24.843 --rc genhtml_function_coverage=1 00:05:24.843 --rc genhtml_legend=1 00:05:24.843 --rc geninfo_all_blocks=1 00:05:24.843 --rc geninfo_unexecuted_blocks=1 00:05:24.843 00:05:24.843 ' 00:05:24.843 09:07:18 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.843 --rc genhtml_branch_coverage=1 00:05:24.843 --rc genhtml_function_coverage=1 00:05:24.843 --rc genhtml_legend=1 00:05:24.843 --rc geninfo_all_blocks=1 00:05:24.843 --rc geninfo_unexecuted_blocks=1 00:05:24.843 00:05:24.843 ' 00:05:24.843 09:07:18 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:24.843 09:07:18 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:24.843 09:07:18 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:24.843 09:07:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.843 09:07:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.843 09:07:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.843 ************************************ 00:05:24.843 START TEST skip_rpc 00:05:24.843 ************************************ 00:05:24.843 09:07:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:24.843 09:07:18 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=59489 00:05:24.843 09:07:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.843 09:07:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:24.843 09:07:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:25.102 [2024-12-13 09:07:18.781611] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:25.102 [2024-12-13 09:07:18.781958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59489 ] 00:05:25.102 [2024-12-13 09:07:18.960046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.361 [2024-12-13 09:07:19.046028] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.361 [2024-12-13 09:07:19.228189] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59489 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 59489 ']' 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 59489 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59489 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 59489' 00:05:30.628 killing process with pid 59489 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 59489 00:05:30.628 09:07:23 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 59489 00:05:31.564 00:05:31.564 ************************************ 00:05:31.564 END TEST skip_rpc 00:05:31.564 ************************************ 00:05:31.564 real 0m6.748s 00:05:31.564 user 0m6.322s 00:05:31.564 sys 0m0.327s 00:05:31.564 09:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.564 09:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.564 09:07:25 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:31.564 09:07:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.564 09:07:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.564 09:07:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.564 ************************************ 00:05:31.564 START TEST skip_rpc_with_json 00:05:31.564 ************************************ 00:05:31.564 09:07:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:31.564 09:07:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:31.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.564 09:07:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59589 00:05:31.564 09:07:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.564 09:07:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.564 09:07:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59589 00:05:31.564 09:07:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 59589 ']' 00:05:31.564 09:07:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.564 09:07:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.564 09:07:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.564 09:07:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.564 09:07:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:31.823 [2024-12-13 09:07:25.575590] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:31.823 [2024-12-13 09:07:25.576021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59589 ] 00:05:32.081 [2024-12-13 09:07:25.764554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.081 [2024-12-13 09:07:25.888530] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.339 [2024-12-13 09:07:26.093807] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:32.907 09:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.907 09:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:32.907 09:07:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:32.907 09:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.907 09:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.907 [2024-12-13 09:07:26.563995] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:32.907 request: 00:05:32.907 { 00:05:32.907 "trtype": "tcp", 00:05:32.907 "method": "nvmf_get_transports", 00:05:32.907 "req_id": 1 00:05:32.907 } 00:05:32.907 Got JSON-RPC error response 00:05:32.907 response: 00:05:32.907 { 00:05:32.907 "code": -19, 00:05:32.907 "message": "No such device" 00:05:32.907 } 00:05:32.907 09:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:32.907 09:07:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:32.907 09:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.907 09:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.907 [2024-12-13 09:07:26.576096] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:32.907 09:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.907 09:07:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:32.907 09:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.907 09:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.907 09:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.907 09:07:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:32.907 { 00:05:32.907 "subsystems": [ 00:05:32.907 { 00:05:32.907 "subsystem": "fsdev", 00:05:32.907 "config": [ 00:05:32.907 { 00:05:32.907 "method": "fsdev_set_opts", 00:05:32.907 "params": { 00:05:32.907 "fsdev_io_pool_size": 65535, 00:05:32.907 "fsdev_io_cache_size": 256 00:05:32.907 } 00:05:32.907 } 00:05:32.907 ] 00:05:32.907 }, 00:05:32.907 { 00:05:32.907 "subsystem": "vfio_user_target", 00:05:32.907 "config": null 00:05:32.907 }, 00:05:32.907 { 00:05:32.907 "subsystem": "keyring", 00:05:32.907 "config": [] 00:05:32.907 }, 00:05:32.907 { 00:05:32.907 "subsystem": "iobuf", 00:05:32.907 "config": [ 00:05:32.907 { 00:05:32.907 "method": "iobuf_set_options", 00:05:32.907 "params": { 00:05:32.907 "small_pool_count": 8192, 00:05:32.907 "large_pool_count": 1024, 00:05:32.907 
"small_bufsize": 8192, 00:05:32.907 "large_bufsize": 135168, 00:05:32.907 "enable_numa": false 00:05:32.907 } 00:05:32.907 } 00:05:32.907 ] 00:05:32.907 }, 00:05:32.907 { 00:05:32.907 "subsystem": "sock", 00:05:32.907 "config": [ 00:05:32.907 { 00:05:32.907 "method": "sock_set_default_impl", 00:05:32.908 "params": { 00:05:32.908 "impl_name": "uring" 00:05:32.908 } 00:05:32.908 }, 00:05:32.908 { 00:05:32.908 "method": "sock_impl_set_options", 00:05:32.908 "params": { 00:05:32.908 "impl_name": "ssl", 00:05:32.908 "recv_buf_size": 4096, 00:05:32.908 "send_buf_size": 4096, 00:05:32.908 "enable_recv_pipe": true, 00:05:32.908 "enable_quickack": false, 00:05:32.908 "enable_placement_id": 0, 00:05:32.908 "enable_zerocopy_send_server": true, 00:05:32.908 "enable_zerocopy_send_client": false, 00:05:32.908 "zerocopy_threshold": 0, 00:05:32.908 "tls_version": 0, 00:05:32.908 "enable_ktls": false 00:05:32.908 } 00:05:32.908 }, 00:05:32.908 { 00:05:32.908 "method": "sock_impl_set_options", 00:05:32.908 "params": { 00:05:32.908 "impl_name": "posix", 00:05:32.908 "recv_buf_size": 2097152, 00:05:32.908 "send_buf_size": 2097152, 00:05:32.908 "enable_recv_pipe": true, 00:05:32.908 "enable_quickack": false, 00:05:32.908 "enable_placement_id": 0, 00:05:32.908 "enable_zerocopy_send_server": true, 00:05:32.908 "enable_zerocopy_send_client": false, 00:05:32.908 "zerocopy_threshold": 0, 00:05:32.908 "tls_version": 0, 00:05:32.908 "enable_ktls": false 00:05:32.908 } 00:05:32.908 }, 00:05:32.908 { 00:05:32.908 "method": "sock_impl_set_options", 00:05:32.908 "params": { 00:05:32.908 "impl_name": "uring", 00:05:32.908 "recv_buf_size": 2097152, 00:05:32.908 "send_buf_size": 2097152, 00:05:32.908 "enable_recv_pipe": true, 00:05:32.908 "enable_quickack": false, 00:05:32.908 "enable_placement_id": 0, 00:05:32.908 "enable_zerocopy_send_server": false, 00:05:32.908 "enable_zerocopy_send_client": false, 00:05:32.908 "zerocopy_threshold": 0, 00:05:32.908 "tls_version": 0, 00:05:32.908 "enable_ktls": false 00:05:32.908 } 00:05:32.908 } 00:05:32.908 ] 00:05:32.908 }, 00:05:32.908 { 00:05:32.908 "subsystem": "vmd", 00:05:32.908 "config": [] 00:05:32.908 }, 00:05:32.908 { 00:05:32.908 "subsystem": "accel", 00:05:32.908 "config": [ 00:05:32.908 { 00:05:32.908 "method": "accel_set_options", 00:05:32.908 "params": { 00:05:32.908 "small_cache_size": 128, 00:05:32.908 "large_cache_size": 16, 00:05:32.908 "task_count": 2048, 00:05:32.908 "sequence_count": 2048, 00:05:32.908 "buf_count": 2048 00:05:32.908 } 00:05:32.908 } 00:05:32.908 ] 00:05:32.908 }, 00:05:32.908 { 00:05:32.908 "subsystem": "bdev", 00:05:32.908 "config": [ 00:05:32.908 { 00:05:32.908 "method": "bdev_set_options", 00:05:32.908 "params": { 00:05:32.908 "bdev_io_pool_size": 65535, 00:05:32.908 "bdev_io_cache_size": 256, 00:05:32.908 "bdev_auto_examine": true, 00:05:32.908 "iobuf_small_cache_size": 128, 00:05:32.908 "iobuf_large_cache_size": 16 00:05:32.908 } 00:05:32.908 }, 00:05:32.908 { 00:05:32.908 "method": "bdev_raid_set_options", 00:05:32.908 "params": { 00:05:32.908 "process_window_size_kb": 1024, 00:05:32.908 "process_max_bandwidth_mb_sec": 0 00:05:32.908 } 00:05:32.908 }, 00:05:32.908 { 00:05:32.908 "method": "bdev_iscsi_set_options", 00:05:32.908 "params": { 00:05:32.908 "timeout_sec": 30 00:05:32.908 } 00:05:32.908 }, 00:05:32.908 { 00:05:32.908 "method": "bdev_nvme_set_options", 00:05:32.908 "params": { 00:05:32.908 "action_on_timeout": "none", 00:05:32.908 "timeout_us": 0, 00:05:32.908 "timeout_admin_us": 0, 00:05:32.908 "keep_alive_timeout_ms": 10000, 
00:05:32.908 "arbitration_burst": 0, 00:05:32.908 "low_priority_weight": 0, 00:05:32.908 "medium_priority_weight": 0, 00:05:32.908 "high_priority_weight": 0, 00:05:32.908 "nvme_adminq_poll_period_us": 10000, 00:05:32.908 "nvme_ioq_poll_period_us": 0, 00:05:32.908 "io_queue_requests": 0, 00:05:32.908 "delay_cmd_submit": true, 00:05:32.908 "transport_retry_count": 4, 00:05:32.908 "bdev_retry_count": 3, 00:05:32.908 "transport_ack_timeout": 0, 00:05:32.908 "ctrlr_loss_timeout_sec": 0, 00:05:32.908 "reconnect_delay_sec": 0, 00:05:32.908 "fast_io_fail_timeout_sec": 0, 00:05:32.908 "disable_auto_failback": false, 00:05:32.908 "generate_uuids": false, 00:05:32.908 "transport_tos": 0, 00:05:32.908 "nvme_error_stat": false, 00:05:32.908 "rdma_srq_size": 0, 00:05:32.908 "io_path_stat": false, 00:05:32.908 "allow_accel_sequence": false, 00:05:32.908 "rdma_max_cq_size": 0, 00:05:32.908 "rdma_cm_event_timeout_ms": 0, 00:05:32.908 "dhchap_digests": [ 00:05:32.908 "sha256", 00:05:32.908 "sha384", 00:05:32.908 "sha512" 00:05:32.908 ], 00:05:32.908 "dhchap_dhgroups": [ 00:05:32.908 "null", 00:05:32.908 "ffdhe2048", 00:05:32.908 "ffdhe3072", 00:05:32.908 "ffdhe4096", 00:05:32.908 "ffdhe6144", 00:05:32.908 "ffdhe8192" 00:05:32.908 ], 00:05:32.908 "rdma_umr_per_io": false 00:05:32.908 } 00:05:32.908 }, 00:05:32.908 { 00:05:32.908 "method": "bdev_nvme_set_hotplug", 00:05:32.908 "params": { 00:05:32.908 "period_us": 100000, 00:05:32.908 "enable": false 00:05:32.908 } 00:05:32.908 }, 00:05:32.908 { 00:05:32.908 "method": "bdev_wait_for_examine" 00:05:32.908 } 00:05:32.908 ] 00:05:32.908 }, 00:05:32.908 { 00:05:32.908 "subsystem": "scsi", 00:05:32.908 "config": null 00:05:32.908 }, 00:05:32.908 { 00:05:32.908 "subsystem": "scheduler", 00:05:32.908 "config": [ 00:05:32.908 { 00:05:32.908 "method": "framework_set_scheduler", 00:05:32.908 "params": { 00:05:32.908 "name": "static" 00:05:32.908 } 00:05:32.908 } 00:05:32.908 ] 00:05:32.908 }, 00:05:32.908 { 00:05:32.908 "subsystem": "vhost_scsi", 00:05:32.908 "config": [] 00:05:32.908 }, 00:05:32.908 { 00:05:32.908 "subsystem": "vhost_blk", 00:05:32.908 "config": [] 00:05:32.908 }, 00:05:32.908 { 00:05:32.908 "subsystem": "ublk", 00:05:32.908 "config": [] 00:05:32.908 }, 00:05:32.908 { 00:05:32.908 "subsystem": "nbd", 00:05:32.908 "config": [] 00:05:32.908 }, 00:05:32.908 { 00:05:32.908 "subsystem": "nvmf", 00:05:32.908 "config": [ 00:05:32.908 { 00:05:32.908 "method": "nvmf_set_config", 00:05:32.908 "params": { 00:05:32.908 "discovery_filter": "match_any", 00:05:32.908 "admin_cmd_passthru": { 00:05:32.908 "identify_ctrlr": false 00:05:32.908 }, 00:05:32.908 "dhchap_digests": [ 00:05:32.908 "sha256", 00:05:32.908 "sha384", 00:05:32.908 "sha512" 00:05:32.908 ], 00:05:32.908 "dhchap_dhgroups": [ 00:05:32.908 "null", 00:05:32.908 "ffdhe2048", 00:05:32.908 "ffdhe3072", 00:05:32.908 "ffdhe4096", 00:05:32.908 "ffdhe6144", 00:05:32.908 "ffdhe8192" 00:05:32.908 ] 00:05:32.908 } 00:05:32.908 }, 00:05:32.908 { 00:05:32.908 "method": "nvmf_set_max_subsystems", 00:05:32.908 "params": { 00:05:32.908 "max_subsystems": 1024 00:05:32.908 } 00:05:32.908 }, 00:05:32.908 { 00:05:32.908 "method": "nvmf_set_crdt", 00:05:32.908 "params": { 00:05:32.908 "crdt1": 0, 00:05:32.908 "crdt2": 0, 00:05:32.908 "crdt3": 0 00:05:32.908 } 00:05:32.908 }, 00:05:32.908 { 00:05:32.908 "method": "nvmf_create_transport", 00:05:32.908 "params": { 00:05:32.908 "trtype": "TCP", 00:05:32.908 "max_queue_depth": 128, 00:05:32.908 "max_io_qpairs_per_ctrlr": 127, 00:05:32.908 "in_capsule_data_size": 4096, 
00:05:32.908 "max_io_size": 131072, 00:05:32.908 "io_unit_size": 131072, 00:05:32.908 "max_aq_depth": 128, 00:05:32.908 "num_shared_buffers": 511, 00:05:32.908 "buf_cache_size": 4294967295, 00:05:32.908 "dif_insert_or_strip": false, 00:05:32.908 "zcopy": false, 00:05:32.908 "c2h_success": true, 00:05:32.908 "sock_priority": 0, 00:05:32.908 "abort_timeout_sec": 1, 00:05:32.908 "ack_timeout": 0, 00:05:32.908 "data_wr_pool_size": 0 00:05:32.908 } 00:05:32.908 } 00:05:32.908 ] 00:05:32.908 }, 00:05:32.908 { 00:05:32.908 "subsystem": "iscsi", 00:05:32.908 "config": [ 00:05:32.908 { 00:05:32.908 "method": "iscsi_set_options", 00:05:32.908 "params": { 00:05:32.908 "node_base": "iqn.2016-06.io.spdk", 00:05:32.908 "max_sessions": 128, 00:05:32.908 "max_connections_per_session": 2, 00:05:32.908 "max_queue_depth": 64, 00:05:32.908 "default_time2wait": 2, 00:05:32.908 "default_time2retain": 20, 00:05:32.908 "first_burst_length": 8192, 00:05:32.908 "immediate_data": true, 00:05:32.908 "allow_duplicated_isid": false, 00:05:32.908 "error_recovery_level": 0, 00:05:32.908 "nop_timeout": 60, 00:05:32.908 "nop_in_interval": 30, 00:05:32.908 "disable_chap": false, 00:05:32.908 "require_chap": false, 00:05:32.908 "mutual_chap": false, 00:05:32.908 "chap_group": 0, 00:05:32.908 "max_large_datain_per_connection": 64, 00:05:32.908 "max_r2t_per_connection": 4, 00:05:32.908 "pdu_pool_size": 36864, 00:05:32.908 "immediate_data_pool_size": 16384, 00:05:32.908 "data_out_pool_size": 2048 00:05:32.908 } 00:05:32.908 } 00:05:32.908 ] 00:05:32.908 } 00:05:32.908 ] 00:05:32.908 } 00:05:32.908 09:07:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:32.908 09:07:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59589 00:05:32.908 09:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59589 ']' 00:05:32.908 09:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59589 00:05:32.908 09:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:32.908 09:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.908 09:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59589 00:05:33.167 killing process with pid 59589 00:05:33.167 09:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.167 09:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.167 09:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59589' 00:05:33.167 09:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59589 00:05:33.167 09:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59589 00:05:35.070 09:07:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59633 00:05:35.070 09:07:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:35.070 09:07:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:40.339 09:07:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59633 00:05:40.339 09:07:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59633 ']' 00:05:40.339 09:07:33 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@958 -- # kill -0 59633 00:05:40.339 09:07:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:40.339 09:07:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.339 09:07:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59633 00:05:40.339 killing process with pid 59633 00:05:40.339 09:07:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.339 09:07:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.339 09:07:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59633' 00:05:40.339 09:07:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59633 00:05:40.339 09:07:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59633 00:05:41.722 09:07:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:41.722 09:07:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:41.722 ************************************ 00:05:41.722 END TEST skip_rpc_with_json 00:05:41.722 ************************************ 00:05:41.722 00:05:41.722 real 0m9.867s 00:05:41.722 user 0m9.588s 00:05:41.722 sys 0m0.704s 00:05:41.722 09:07:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.722 09:07:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:41.722 09:07:35 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:41.722 09:07:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.722 09:07:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.722 09:07:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.722 ************************************ 00:05:41.722 START TEST skip_rpc_with_delay 00:05:41.722 ************************************ 00:05:41.722 09:07:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:41.722 09:07:35 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:41.722 09:07:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:41.722 09:07:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:41.722 09:07:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.722 09:07:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.722 09:07:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.722 09:07:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.722 09:07:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.722 09:07:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.722 
09:07:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.722 09:07:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:41.722 09:07:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:41.722 [2024-12-13 09:07:35.508249] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:41.722 09:07:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:41.722 09:07:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:41.722 09:07:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:41.722 09:07:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:41.722 00:05:41.722 real 0m0.209s 00:05:41.722 user 0m0.119s 00:05:41.722 sys 0m0.087s 00:05:41.722 ************************************ 00:05:41.722 END TEST skip_rpc_with_delay 00:05:41.722 ************************************ 00:05:41.722 09:07:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.722 09:07:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:41.981 09:07:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:41.981 09:07:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:41.981 09:07:35 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:41.981 09:07:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.981 09:07:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.981 09:07:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.981 ************************************ 00:05:41.981 START TEST exit_on_failed_rpc_init 00:05:41.981 ************************************ 00:05:41.981 09:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:41.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.981 09:07:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59761 00:05:41.981 09:07:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59761 00:05:41.981 09:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 59761 ']' 00:05:41.981 09:07:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.981 09:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.981 09:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.981 09:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.981 09:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.981 09:07:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:41.981 [2024-12-13 09:07:35.764121] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:41.981 [2024-12-13 09:07:35.764271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59761 ] 00:05:42.240 [2024-12-13 09:07:35.928731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.240 [2024-12-13 09:07:36.014561] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.499 [2024-12-13 09:07:36.203480] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:43.067 09:07:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.067 09:07:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:43.067 09:07:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.067 09:07:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:43.067 09:07:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:43.067 09:07:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:43.067 09:07:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.067 09:07:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.067 09:07:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.067 09:07:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.067 09:07:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.067 09:07:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.067 09:07:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.067 09:07:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:43.067 09:07:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:43.067 [2024-12-13 09:07:36.811353] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:43.067 [2024-12-13 09:07:36.811513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59779 ] 00:05:43.326 [2024-12-13 09:07:36.988765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.326 [2024-12-13 09:07:37.115761] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.326 [2024-12-13 09:07:37.115898] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:43.326 [2024-12-13 09:07:37.115925] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:43.326 [2024-12-13 09:07:37.115947] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:43.585 09:07:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:43.585 09:07:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:43.585 09:07:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:43.585 09:07:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:43.585 09:07:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:43.585 09:07:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:43.585 09:07:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:43.585 09:07:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59761 00:05:43.585 09:07:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 59761 ']' 00:05:43.585 09:07:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 59761 00:05:43.585 09:07:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:43.585 09:07:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.585 09:07:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59761 00:05:43.585 killing process with pid 59761 00:05:43.585 09:07:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.585 09:07:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.585 09:07:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59761' 00:05:43.585 09:07:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 59761 00:05:43.585 09:07:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 59761 00:05:45.491 00:05:45.491 real 0m3.508s 00:05:45.491 user 0m4.051s 00:05:45.491 sys 0m0.479s 00:05:45.491 09:07:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.491 ************************************ 00:05:45.491 END TEST exit_on_failed_rpc_init 00:05:45.491 ************************************ 00:05:45.491 09:07:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:45.491 09:07:39 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:45.491 00:05:45.491 real 0m20.737s 00:05:45.491 user 0m20.270s 00:05:45.491 sys 0m1.798s 00:05:45.491 ************************************ 00:05:45.491 END TEST skip_rpc 00:05:45.491 ************************************ 00:05:45.491 09:07:39 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.491 09:07:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.491 09:07:39 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:45.491 09:07:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.491 09:07:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.491 09:07:39 -- common/autotest_common.sh@10 -- # set +x 00:05:45.491 
************************************ 00:05:45.491 START TEST rpc_client 00:05:45.491 ************************************ 00:05:45.491 09:07:39 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:45.491 * Looking for test storage... 00:05:45.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:45.491 09:07:39 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:45.491 09:07:39 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:45.491 09:07:39 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:45.751 09:07:39 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.751 09:07:39 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:45.751 09:07:39 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.751 09:07:39 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:45.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.751 --rc genhtml_branch_coverage=1 00:05:45.751 --rc genhtml_function_coverage=1 00:05:45.751 --rc genhtml_legend=1 00:05:45.751 --rc geninfo_all_blocks=1 00:05:45.751 --rc geninfo_unexecuted_blocks=1 00:05:45.751 00:05:45.751 ' 00:05:45.751 09:07:39 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:45.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.751 --rc genhtml_branch_coverage=1 00:05:45.751 --rc genhtml_function_coverage=1 00:05:45.751 --rc genhtml_legend=1 00:05:45.751 --rc geninfo_all_blocks=1 00:05:45.751 --rc geninfo_unexecuted_blocks=1 00:05:45.751 00:05:45.751 ' 00:05:45.751 09:07:39 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:45.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.751 --rc genhtml_branch_coverage=1 00:05:45.751 --rc genhtml_function_coverage=1 00:05:45.751 --rc genhtml_legend=1 00:05:45.751 --rc geninfo_all_blocks=1 00:05:45.751 --rc geninfo_unexecuted_blocks=1 00:05:45.751 00:05:45.751 ' 00:05:45.751 09:07:39 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:45.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.751 --rc genhtml_branch_coverage=1 00:05:45.751 --rc genhtml_function_coverage=1 00:05:45.751 --rc genhtml_legend=1 00:05:45.751 --rc geninfo_all_blocks=1 00:05:45.751 --rc geninfo_unexecuted_blocks=1 00:05:45.751 00:05:45.751 ' 00:05:45.751 09:07:39 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:45.751 OK 00:05:45.751 09:07:39 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:45.751 00:05:45.751 real 0m0.249s 00:05:45.751 user 0m0.150s 00:05:45.751 sys 0m0.106s 00:05:45.751 ************************************ 00:05:45.751 END TEST rpc_client 00:05:45.751 ************************************ 00:05:45.751 09:07:39 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.751 09:07:39 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:45.751 09:07:39 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:45.751 09:07:39 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.751 09:07:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.751 09:07:39 -- common/autotest_common.sh@10 -- # set +x 00:05:45.751 ************************************ 00:05:45.751 START TEST json_config 00:05:45.751 ************************************ 00:05:45.751 09:07:39 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:45.751 09:07:39 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:45.751 09:07:39 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:45.751 09:07:39 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:46.013 09:07:39 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:46.013 09:07:39 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.013 09:07:39 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.013 09:07:39 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.013 09:07:39 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.013 09:07:39 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.013 09:07:39 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.013 09:07:39 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.013 09:07:39 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.013 09:07:39 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.013 09:07:39 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.013 09:07:39 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.013 09:07:39 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:46.013 09:07:39 json_config -- scripts/common.sh@345 -- # : 1 00:05:46.013 09:07:39 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.013 09:07:39 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.013 09:07:39 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:46.013 09:07:39 json_config -- scripts/common.sh@353 -- # local d=1 00:05:46.013 09:07:39 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.013 09:07:39 json_config -- scripts/common.sh@355 -- # echo 1 00:05:46.013 09:07:39 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.013 09:07:39 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:46.013 09:07:39 json_config -- scripts/common.sh@353 -- # local d=2 00:05:46.013 09:07:39 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.013 09:07:39 json_config -- scripts/common.sh@355 -- # echo 2 00:05:46.013 09:07:39 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.013 09:07:39 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.013 09:07:39 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.013 09:07:39 json_config -- scripts/common.sh@368 -- # return 0 00:05:46.013 09:07:39 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.013 09:07:39 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:46.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.013 --rc genhtml_branch_coverage=1 00:05:46.013 --rc genhtml_function_coverage=1 00:05:46.013 --rc genhtml_legend=1 00:05:46.013 --rc geninfo_all_blocks=1 00:05:46.013 --rc geninfo_unexecuted_blocks=1 00:05:46.013 00:05:46.013 ' 00:05:46.013 09:07:39 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:46.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.013 --rc genhtml_branch_coverage=1 00:05:46.013 --rc genhtml_function_coverage=1 00:05:46.013 --rc genhtml_legend=1 00:05:46.013 --rc geninfo_all_blocks=1 00:05:46.013 --rc geninfo_unexecuted_blocks=1 00:05:46.013 00:05:46.013 ' 00:05:46.013 09:07:39 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:46.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.013 --rc genhtml_branch_coverage=1 00:05:46.013 --rc genhtml_function_coverage=1 00:05:46.013 --rc genhtml_legend=1 00:05:46.013 --rc geninfo_all_blocks=1 00:05:46.013 --rc geninfo_unexecuted_blocks=1 00:05:46.013 00:05:46.013 ' 00:05:46.013 09:07:39 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:46.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.013 --rc genhtml_branch_coverage=1 00:05:46.013 --rc genhtml_function_coverage=1 00:05:46.013 --rc genhtml_legend=1 00:05:46.013 --rc geninfo_all_blocks=1 00:05:46.013 --rc geninfo_unexecuted_blocks=1 00:05:46.013 00:05:46.013 ' 00:05:46.013 09:07:39 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:46.013 09:07:39 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:46.013 09:07:39 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.013 09:07:39 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.013 09:07:39 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.013 09:07:39 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.013 09:07:39 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.013 09:07:39 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.013 09:07:39 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.013 09:07:39 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.013 09:07:39 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.013 09:07:39 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.013 09:07:39 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:05:46.013 09:07:39 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:05:46.013 09:07:39 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.013 09:07:39 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.013 09:07:39 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:46.013 09:07:39 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:46.013 09:07:39 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:46.013 09:07:39 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:46.013 09:07:39 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.013 09:07:39 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.013 09:07:39 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.014 09:07:39 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.014 09:07:39 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.014 09:07:39 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.014 09:07:39 json_config -- paths/export.sh@5 -- # export PATH 00:05:46.014 09:07:39 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.014 09:07:39 json_config -- nvmf/common.sh@51 -- # : 0 00:05:46.014 09:07:39 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:46.014 09:07:39 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:46.014 09:07:39 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:46.014 09:07:39 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.014 09:07:39 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.014 09:07:39 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:46.014 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:46.014 09:07:39 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:46.014 09:07:39 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:46.014 09:07:39 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:46.014 09:07:39 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:46.014 09:07:39 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:46.014 09:07:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:46.014 09:07:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:46.014 09:07:39 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:46.014 09:07:39 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:46.014 09:07:39 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:46.014 09:07:39 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:46.014 09:07:39 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:46.014 09:07:39 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:46.014 09:07:39 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:46.014 09:07:39 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:46.014 09:07:39 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:46.014 09:07:39 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:46.014 09:07:39 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:46.014 09:07:39 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:46.014 INFO: JSON configuration test init 00:05:46.014 09:07:39 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:46.014 09:07:39 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:46.014 09:07:39 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:46.014 09:07:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.014 09:07:39 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:46.014 09:07:39 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:46.014 09:07:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.014 09:07:39 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:46.014 09:07:39 json_config -- json_config/common.sh@9 -- # local app=target 00:05:46.014 Waiting for target to run... 
00:05:46.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:46.014 09:07:39 json_config -- json_config/common.sh@10 -- # shift 00:05:46.014 09:07:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:46.014 09:07:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:46.014 09:07:39 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:46.014 09:07:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.014 09:07:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.014 09:07:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59938 00:05:46.014 09:07:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:46.014 09:07:39 json_config -- json_config/common.sh@25 -- # waitforlisten 59938 /var/tmp/spdk_tgt.sock 00:05:46.014 09:07:39 json_config -- common/autotest_common.sh@835 -- # '[' -z 59938 ']' 00:05:46.014 09:07:39 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:46.014 09:07:39 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:46.014 09:07:39 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.014 09:07:39 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:46.014 09:07:39 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.014 09:07:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.014 [2024-12-13 09:07:39.869754] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:46.014 [2024-12-13 09:07:39.870232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59938 ] 00:05:46.626 [2024-12-13 09:07:40.216593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.626 [2024-12-13 09:07:40.296192] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.194 09:07:40 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.194 09:07:40 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:47.194 09:07:40 json_config -- json_config/common.sh@26 -- # echo '' 00:05:47.194 00:05:47.194 09:07:40 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:47.194 09:07:40 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:47.194 09:07:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:47.194 09:07:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.194 09:07:40 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:47.194 09:07:40 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:47.194 09:07:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:47.194 09:07:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.194 09:07:40 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:47.194 09:07:40 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:47.194 09:07:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:47.762 [2024-12-13 09:07:41.389397] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.330 09:07:41 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:48.330 09:07:41 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:48.330 09:07:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:48.330 09:07:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.330 09:07:41 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:48.330 09:07:41 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:48.330 09:07:41 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:48.330 09:07:41 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:48.330 09:07:41 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:48.330 09:07:41 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:48.330 09:07:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:48.330 09:07:41 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:48.330 09:07:42 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:48.330 09:07:42 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:48.330 09:07:42 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:05:48.330 09:07:42 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:48.330 09:07:42 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:48.330 09:07:42 json_config -- json_config/json_config.sh@54 -- # sort 00:05:48.330 09:07:42 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:48.330 09:07:42 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:48.330 09:07:42 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:48.330 09:07:42 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:48.330 09:07:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:48.330 09:07:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.590 09:07:42 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:48.590 09:07:42 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:48.590 09:07:42 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:48.590 09:07:42 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:48.590 09:07:42 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:48.590 09:07:42 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:48.590 09:07:42 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:48.590 09:07:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:48.590 09:07:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.590 09:07:42 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:48.590 09:07:42 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:48.590 09:07:42 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:48.590 09:07:42 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:48.590 09:07:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:48.919 MallocForNvmf0 00:05:48.919 09:07:42 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:48.919 09:07:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:48.919 MallocForNvmf1 00:05:48.919 09:07:42 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:48.919 09:07:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:49.178 [2024-12-13 09:07:43.049309] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:49.437 09:07:43 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:49.437 09:07:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:49.437 09:07:43 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:49.437 09:07:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:50.005 09:07:43 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:50.005 09:07:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:50.005 09:07:43 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:50.005 09:07:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:50.264 [2024-12-13 09:07:44.034189] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:50.264 09:07:44 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:50.264 09:07:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:50.264 09:07:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.264 09:07:44 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:50.264 09:07:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:50.264 09:07:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.264 09:07:44 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:50.264 09:07:44 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:50.264 09:07:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:50.523 MallocBdevForConfigChangeCheck 00:05:50.782 09:07:44 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:50.782 09:07:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:50.782 09:07:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.782 09:07:44 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:50.782 09:07:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:51.041 INFO: shutting down applications... 00:05:51.041 09:07:44 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
00:05:51.041 09:07:44 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:51.041 09:07:44 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:51.041 09:07:44 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:51.041 09:07:44 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:51.608 Calling clear_iscsi_subsystem 00:05:51.608 Calling clear_nvmf_subsystem 00:05:51.608 Calling clear_nbd_subsystem 00:05:51.608 Calling clear_ublk_subsystem 00:05:51.608 Calling clear_vhost_blk_subsystem 00:05:51.608 Calling clear_vhost_scsi_subsystem 00:05:51.608 Calling clear_bdev_subsystem 00:05:51.608 09:07:45 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:51.608 09:07:45 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:51.608 09:07:45 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:51.608 09:07:45 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:51.608 09:07:45 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:51.608 09:07:45 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:51.866 09:07:45 json_config -- json_config/json_config.sh@352 -- # break 00:05:51.866 09:07:45 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:51.867 09:07:45 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:51.867 09:07:45 json_config -- json_config/common.sh@31 -- # local app=target 00:05:51.867 09:07:45 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:51.867 09:07:45 json_config -- json_config/common.sh@35 -- # [[ -n 59938 ]] 00:05:51.867 09:07:45 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59938 00:05:51.867 09:07:45 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:51.867 09:07:45 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.867 09:07:45 json_config -- json_config/common.sh@41 -- # kill -0 59938 00:05:51.867 09:07:45 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:52.434 09:07:46 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:52.434 09:07:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:52.434 09:07:46 json_config -- json_config/common.sh@41 -- # kill -0 59938 00:05:52.434 09:07:46 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:53.002 09:07:46 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:53.002 09:07:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:53.002 09:07:46 json_config -- json_config/common.sh@41 -- # kill -0 59938 00:05:53.002 09:07:46 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:53.002 09:07:46 json_config -- json_config/common.sh@43 -- # break 00:05:53.002 09:07:46 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:53.002 SPDK target shutdown done 00:05:53.002 INFO: relaunching applications... 
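The shutdown that follows clear_config is driven by json_config/common.sh: it sends SIGINT to the target (pid 59938 here) and then polls with kill -0 in half-second steps, giving up after 30 tries. A minimal stand-alone version of that wait loop (the pid is copied from the trace and purely illustrative):

    pid=59938                                # spdk_tgt PID from the trace above
    kill -SIGINT "$pid"                      # ask the target to shut down cleanly
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break  # stop polling once the process is gone
        sleep 0.5
    done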
00:05:53.002 09:07:46 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:53.002 09:07:46 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:53.002 09:07:46 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:53.002 09:07:46 json_config -- json_config/common.sh@9 -- # local app=target 00:05:53.002 09:07:46 json_config -- json_config/common.sh@10 -- # shift 00:05:53.002 09:07:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:53.002 09:07:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:53.002 09:07:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:53.002 09:07:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:53.002 09:07:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:53.002 09:07:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=60152 00:05:53.002 09:07:46 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:53.002 09:07:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:53.002 Waiting for target to run... 00:05:53.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:53.002 09:07:46 json_config -- json_config/common.sh@25 -- # waitforlisten 60152 /var/tmp/spdk_tgt.sock 00:05:53.002 09:07:46 json_config -- common/autotest_common.sh@835 -- # '[' -z 60152 ']' 00:05:53.002 09:07:46 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:53.002 09:07:46 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.002 09:07:46 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:53.002 09:07:46 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.002 09:07:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.002 [2024-12-13 09:07:46.825449] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:53.002 [2024-12-13 09:07:46.825878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60152 ] 00:05:53.571 [2024-12-13 09:07:47.153884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.571 [2024-12-13 09:07:47.234660] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.831 [2024-12-13 09:07:47.541054] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:54.399 [2024-12-13 09:07:48.104674] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.399 [2024-12-13 09:07:48.136869] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:54.399 00:05:54.399 INFO: Checking if target configuration is the same... 
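This relaunch is the round trip the json_config test exists to exercise: the live configuration is captured with save_config, a new spdk_tgt is started with --json pointing at that capture, and the old and new configurations are then diffed. A sketch of the same round trip using the paths from the trace (the test itself routes save_config output through json_diff.sh rather than writing the file this directly):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    cfg=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
    $rpc save_config > "$cfg"                # snapshot the running configuration as JSON
    # ...stop the old target, then start a new one from the snapshot...
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --json "$cfg" &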
00:05:54.399 09:07:48 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.399 09:07:48 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:54.399 09:07:48 json_config -- json_config/common.sh@26 -- # echo '' 00:05:54.399 09:07:48 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:54.399 09:07:48 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:54.399 09:07:48 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:54.399 09:07:48 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:54.399 09:07:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:54.399 + '[' 2 -ne 2 ']' 00:05:54.399 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:54.399 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:54.399 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:54.399 +++ basename /dev/fd/62 00:05:54.399 ++ mktemp /tmp/62.XXX 00:05:54.399 + tmp_file_1=/tmp/62.EUd 00:05:54.399 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:54.399 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:54.399 + tmp_file_2=/tmp/spdk_tgt_config.json.a1l 00:05:54.399 + ret=0 00:05:54.399 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:54.968 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:54.968 + diff -u /tmp/62.EUd /tmp/spdk_tgt_config.json.a1l 00:05:54.968 INFO: JSON config files are the same 00:05:54.968 + echo 'INFO: JSON config files are the same' 00:05:54.968 + rm /tmp/62.EUd /tmp/spdk_tgt_config.json.a1l 00:05:54.968 + exit 0 00:05:54.968 INFO: changing configuration and checking if this can be detected... 00:05:54.968 09:07:48 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:54.968 09:07:48 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:54.968 09:07:48 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:54.968 09:07:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:55.228 09:07:48 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:55.228 09:07:48 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:55.228 09:07:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:55.228 + '[' 2 -ne 2 ']' 00:05:55.228 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:55.228 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:55.228 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:55.228 +++ basename /dev/fd/62 00:05:55.228 ++ mktemp /tmp/62.XXX 00:05:55.228 + tmp_file_1=/tmp/62.48O 00:05:55.228 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:55.228 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:55.228 + tmp_file_2=/tmp/spdk_tgt_config.json.l5N 00:05:55.228 + ret=0 00:05:55.228 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:55.796 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:55.796 + diff -u /tmp/62.48O /tmp/spdk_tgt_config.json.l5N 00:05:55.796 + ret=1 00:05:55.796 + echo '=== Start of file: /tmp/62.48O ===' 00:05:55.796 + cat /tmp/62.48O 00:05:55.796 + echo '=== End of file: /tmp/62.48O ===' 00:05:55.796 + echo '' 00:05:55.796 + echo '=== Start of file: /tmp/spdk_tgt_config.json.l5N ===' 00:05:55.796 + cat /tmp/spdk_tgt_config.json.l5N 00:05:55.796 + echo '=== End of file: /tmp/spdk_tgt_config.json.l5N ===' 00:05:55.796 + echo '' 00:05:55.796 + rm /tmp/62.48O /tmp/spdk_tgt_config.json.l5N 00:05:55.796 + exit 1 00:05:55.796 INFO: configuration change detected. 00:05:55.796 09:07:49 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:55.796 09:07:49 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:55.796 09:07:49 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:55.796 09:07:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:55.796 09:07:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.796 09:07:49 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:55.796 09:07:49 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:55.796 09:07:49 json_config -- json_config/json_config.sh@324 -- # [[ -n 60152 ]] 00:05:55.796 09:07:49 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:55.796 09:07:49 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:55.796 09:07:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:55.796 09:07:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.796 09:07:49 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:55.796 09:07:49 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:55.796 09:07:49 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:55.796 09:07:49 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:55.796 09:07:49 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:55.796 09:07:49 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:55.796 09:07:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:55.796 09:07:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.796 09:07:49 json_config -- json_config/json_config.sh@330 -- # killprocess 60152 00:05:55.796 09:07:49 json_config -- common/autotest_common.sh@954 -- # '[' -z 60152 ']' 00:05:55.796 09:07:49 json_config -- common/autotest_common.sh@958 -- # kill -0 60152 00:05:55.796 09:07:49 json_config -- common/autotest_common.sh@959 -- # uname 00:05:55.796 09:07:49 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.796 09:07:49 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60152 00:05:55.796 
killing process with pid 60152 00:05:55.796 09:07:49 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.796 09:07:49 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.797 09:07:49 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60152' 00:05:55.797 09:07:49 json_config -- common/autotest_common.sh@973 -- # kill 60152 00:05:55.797 09:07:49 json_config -- common/autotest_common.sh@978 -- # wait 60152 00:05:56.735 09:07:50 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:56.735 09:07:50 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:56.735 09:07:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:56.735 09:07:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.735 INFO: Success 00:05:56.735 09:07:50 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:56.735 09:07:50 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:56.735 00:05:56.735 real 0m10.842s 00:05:56.735 user 0m14.709s 00:05:56.735 sys 0m1.763s 00:05:56.735 09:07:50 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.735 ************************************ 00:05:56.735 END TEST json_config 00:05:56.735 ************************************ 00:05:56.735 09:07:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.735 09:07:50 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:56.735 09:07:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.735 09:07:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.735 09:07:50 -- common/autotest_common.sh@10 -- # set +x 00:05:56.735 ************************************ 00:05:56.735 START TEST json_config_extra_key 00:05:56.735 ************************************ 00:05:56.735 09:07:50 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:56.735 09:07:50 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:56.735 09:07:50 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:56.735 09:07:50 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:56.735 09:07:50 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.735 09:07:50 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:56.735 09:07:50 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.735 09:07:50 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:56.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.735 --rc genhtml_branch_coverage=1 00:05:56.735 --rc genhtml_function_coverage=1 00:05:56.735 --rc genhtml_legend=1 00:05:56.735 --rc geninfo_all_blocks=1 00:05:56.735 --rc geninfo_unexecuted_blocks=1 00:05:56.735 00:05:56.735 ' 00:05:56.735 09:07:50 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:56.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.735 --rc genhtml_branch_coverage=1 00:05:56.735 --rc genhtml_function_coverage=1 00:05:56.735 --rc genhtml_legend=1 00:05:56.735 --rc geninfo_all_blocks=1 00:05:56.735 --rc geninfo_unexecuted_blocks=1 00:05:56.735 00:05:56.735 ' 00:05:56.735 09:07:50 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:56.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.735 --rc genhtml_branch_coverage=1 00:05:56.735 --rc genhtml_function_coverage=1 00:05:56.735 --rc genhtml_legend=1 00:05:56.735 --rc geninfo_all_blocks=1 00:05:56.735 --rc geninfo_unexecuted_blocks=1 00:05:56.735 00:05:56.735 ' 00:05:56.735 09:07:50 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:56.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.735 --rc genhtml_branch_coverage=1 00:05:56.735 --rc genhtml_function_coverage=1 00:05:56.735 --rc genhtml_legend=1 00:05:56.735 --rc geninfo_all_blocks=1 00:05:56.735 --rc geninfo_unexecuted_blocks=1 00:05:56.735 00:05:56.735 ' 00:05:56.735 09:07:50 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:56.735 09:07:50 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:05:56.735 09:07:50 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:56.735 09:07:50 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:56.735 09:07:50 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:56.735 09:07:50 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:56.735 09:07:50 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:56.735 09:07:50 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:56.735 09:07:50 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:56.735 09:07:50 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:56.735 09:07:50 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:56.735 09:07:50 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:56.735 09:07:50 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:05:56.735 09:07:50 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:05:56.735 09:07:50 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:56.735 09:07:50 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:56.735 09:07:50 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:56.735 09:07:50 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:56.735 09:07:50 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:56.735 09:07:50 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:56.735 09:07:50 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.735 09:07:50 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.735 09:07:50 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.735 09:07:50 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:56.735 09:07:50 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.735 09:07:50 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:56.735 09:07:50 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:56.735 09:07:50 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:56.735 09:07:50 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:56.735 09:07:50 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:56.736 09:07:50 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:56.736 09:07:50 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:56.736 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:56.736 09:07:50 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:56.736 09:07:50 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:56.736 09:07:50 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:56.736 09:07:50 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:56.736 09:07:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:56.736 09:07:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:56.736 09:07:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:56.736 09:07:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:56.736 09:07:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:56.736 09:07:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:56.996 09:07:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:56.996 09:07:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:56.996 09:07:50 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:56.996 09:07:50 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:56.996 INFO: launching applications... 
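The "integer expression expected" message above is not a test failure: line 33 of nvmf/common.sh runs a numeric test that expands to '[' '' -eq 1 ']' because the value it checks is an empty string in this environment, bash complains, and the script simply carries on (have_pci_nics ends up 0). A defensive form of the same kind of check that avoids the noise (the variable name here is illustrative, not the one common.sh uses):

    # default an unset flag to 0 before comparing it numerically
    if [ "${SOME_NVMF_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi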
00:05:56.996 09:07:50 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:56.996 09:07:50 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:56.996 09:07:50 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:56.996 09:07:50 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:56.996 09:07:50 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:56.996 09:07:50 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:56.996 09:07:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:56.996 09:07:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:56.996 09:07:50 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=60318 00:05:56.996 09:07:50 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:56.996 Waiting for target to run... 00:05:56.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:56.996 09:07:50 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 60318 /var/tmp/spdk_tgt.sock 00:05:56.996 09:07:50 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:56.996 09:07:50 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 60318 ']' 00:05:56.996 09:07:50 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:56.996 09:07:50 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.996 09:07:50 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:56.996 09:07:50 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.996 09:07:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:56.996 [2024-12-13 09:07:50.766114] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:56.996 [2024-12-13 09:07:50.766356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60318 ] 00:05:57.256 [2024-12-13 09:07:51.109441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.515 [2024-12-13 09:07:51.188547] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.515 [2024-12-13 09:07:51.360082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.084 00:05:58.084 INFO: shutting down applications... 00:05:58.084 09:07:51 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.084 09:07:51 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:58.084 09:07:51 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:58.084 09:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
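The "Waiting for target to run..." step above is waitforlisten: the harness does not proceed until the freshly launched spdk_tgt (pid 60318) is alive and answering RPCs on /var/tmp/spdk_tgt.sock. A rough stand-alone approximation of that wait, built only from commands visible elsewhere in this log (the real helper in autotest_common.sh is more thorough; treat this as a sketch):

    pid=60318                                       # values from the trace; illustrative
    sock=/var/tmp/spdk_tgt.sock
    for ((i = 0; i < 100; i++)); do                 # max_retries=100, as in the trace
        kill -0 "$pid" 2>/dev/null || { echo "target exited early" >&2; exit 1; }
        # consider the target ready once an RPC over its socket succeeds
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 rpc_get_methods \
               >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done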
00:05:58.084 09:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:58.084 09:07:51 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:58.084 09:07:51 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:58.084 09:07:51 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 60318 ]] 00:05:58.084 09:07:51 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 60318 00:05:58.084 09:07:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:58.084 09:07:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.084 09:07:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60318 00:05:58.084 09:07:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:58.652 09:07:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:58.652 09:07:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.652 09:07:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60318 00:05:58.652 09:07:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:58.911 09:07:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:58.911 09:07:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.911 09:07:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60318 00:05:58.911 09:07:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:59.480 09:07:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:59.480 09:07:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.480 09:07:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60318 00:05:59.480 09:07:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:00.049 09:07:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:00.049 09:07:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:00.049 09:07:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60318 00:06:00.049 09:07:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:00.617 09:07:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:00.617 09:07:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:00.617 SPDK target shutdown done 00:06:00.617 Success 00:06:00.617 09:07:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60318 00:06:00.617 09:07:54 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:00.617 09:07:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:00.617 09:07:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:00.617 09:07:54 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:00.618 09:07:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:00.618 00:06:00.618 real 0m3.880s 00:06:00.618 user 0m3.296s 00:06:00.618 sys 0m0.534s 00:06:00.618 09:07:54 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.618 09:07:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:00.618 ************************************ 00:06:00.618 END TEST json_config_extra_key 00:06:00.618 ************************************ 00:06:00.618 09:07:54 -- spdk/autotest.sh@161 
-- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:00.618 09:07:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.618 09:07:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.618 09:07:54 -- common/autotest_common.sh@10 -- # set +x 00:06:00.618 ************************************ 00:06:00.618 START TEST alias_rpc 00:06:00.618 ************************************ 00:06:00.618 09:07:54 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:00.618 * Looking for test storage... 00:06:00.618 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:00.618 09:07:54 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:00.618 09:07:54 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:00.618 09:07:54 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:00.877 09:07:54 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.877 09:07:54 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:00.877 09:07:54 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.877 09:07:54 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:00.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.877 --rc genhtml_branch_coverage=1 00:06:00.877 --rc genhtml_function_coverage=1 00:06:00.877 --rc genhtml_legend=1 00:06:00.877 --rc geninfo_all_blocks=1 00:06:00.877 --rc geninfo_unexecuted_blocks=1 00:06:00.877 00:06:00.877 ' 00:06:00.877 09:07:54 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:00.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.877 --rc genhtml_branch_coverage=1 00:06:00.877 --rc genhtml_function_coverage=1 00:06:00.877 --rc genhtml_legend=1 00:06:00.877 --rc geninfo_all_blocks=1 00:06:00.877 --rc geninfo_unexecuted_blocks=1 00:06:00.877 00:06:00.877 ' 00:06:00.877 09:07:54 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:00.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.877 --rc genhtml_branch_coverage=1 00:06:00.877 --rc genhtml_function_coverage=1 00:06:00.877 --rc genhtml_legend=1 00:06:00.877 --rc geninfo_all_blocks=1 00:06:00.877 --rc geninfo_unexecuted_blocks=1 00:06:00.877 00:06:00.877 ' 00:06:00.877 09:07:54 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:00.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.877 --rc genhtml_branch_coverage=1 00:06:00.877 --rc genhtml_function_coverage=1 00:06:00.877 --rc genhtml_legend=1 00:06:00.877 --rc geninfo_all_blocks=1 00:06:00.877 --rc geninfo_unexecuted_blocks=1 00:06:00.877 00:06:00.877 ' 00:06:00.877 09:07:54 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:00.877 09:07:54 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=60424 00:06:00.877 09:07:54 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.877 09:07:54 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 60424 00:06:00.877 09:07:54 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 60424 ']' 00:06:00.877 09:07:54 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.877 09:07:54 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.877 09:07:54 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:00.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.877 09:07:54 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.877 09:07:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.877 [2024-12-13 09:07:54.699743] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:00.877 [2024-12-13 09:07:54.699930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60424 ] 00:06:01.136 [2024-12-13 09:07:54.879919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.136 [2024-12-13 09:07:54.967021] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.395 [2024-12-13 09:07:55.153545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.964 09:07:55 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.964 09:07:55 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:01.964 09:07:55 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:02.222 09:07:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 60424 00:06:02.222 09:07:55 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 60424 ']' 00:06:02.222 09:07:55 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 60424 00:06:02.222 09:07:55 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:02.223 09:07:55 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.223 09:07:55 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60424 00:06:02.223 09:07:56 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.223 killing process with pid 60424 00:06:02.223 09:07:56 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.223 09:07:56 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60424' 00:06:02.223 09:07:56 alias_rpc -- common/autotest_common.sh@973 -- # kill 60424 00:06:02.223 09:07:56 alias_rpc -- common/autotest_common.sh@978 -- # wait 60424 00:06:04.133 ************************************ 00:06:04.133 END TEST alias_rpc 00:06:04.133 ************************************ 00:06:04.133 00:06:04.133 real 0m3.409s 00:06:04.133 user 0m3.691s 00:06:04.133 sys 0m0.512s 00:06:04.133 09:07:57 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.133 09:07:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.133 09:07:57 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:04.133 09:07:57 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:04.133 09:07:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.133 09:07:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.133 09:07:57 -- common/autotest_common.sh@10 -- # set +x 00:06:04.133 ************************************ 00:06:04.133 START TEST spdkcli_tcp 00:06:04.133 ************************************ 00:06:04.133 09:07:57 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:04.133 * Looking for test storage... 
00:06:04.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:04.133 09:07:57 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:04.133 09:07:57 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:04.133 09:07:57 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:04.394 09:07:58 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:04.394 09:07:58 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.394 09:07:58 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.394 09:07:58 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.394 09:07:58 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.394 09:07:58 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.394 09:07:58 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.394 09:07:58 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.394 09:07:58 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.394 09:07:58 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.394 09:07:58 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.394 09:07:58 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.394 09:07:58 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:04.394 09:07:58 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:04.394 09:07:58 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.394 09:07:58 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.394 09:07:58 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:04.395 09:07:58 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:04.395 09:07:58 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.395 09:07:58 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:04.395 09:07:58 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.395 09:07:58 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:04.395 09:07:58 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:04.395 09:07:58 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.395 09:07:58 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:04.395 09:07:58 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.395 09:07:58 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.395 09:07:58 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.395 09:07:58 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:04.395 09:07:58 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.395 09:07:58 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:04.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.395 --rc genhtml_branch_coverage=1 00:06:04.395 --rc genhtml_function_coverage=1 00:06:04.395 --rc genhtml_legend=1 00:06:04.395 --rc geninfo_all_blocks=1 00:06:04.395 --rc geninfo_unexecuted_blocks=1 00:06:04.395 00:06:04.395 ' 00:06:04.395 09:07:58 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:04.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.395 --rc genhtml_branch_coverage=1 00:06:04.395 --rc genhtml_function_coverage=1 00:06:04.395 --rc genhtml_legend=1 00:06:04.395 --rc geninfo_all_blocks=1 00:06:04.395 --rc geninfo_unexecuted_blocks=1 00:06:04.395 
00:06:04.395 ' 00:06:04.395 09:07:58 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:04.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.395 --rc genhtml_branch_coverage=1 00:06:04.395 --rc genhtml_function_coverage=1 00:06:04.395 --rc genhtml_legend=1 00:06:04.395 --rc geninfo_all_blocks=1 00:06:04.395 --rc geninfo_unexecuted_blocks=1 00:06:04.395 00:06:04.395 ' 00:06:04.395 09:07:58 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:04.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.395 --rc genhtml_branch_coverage=1 00:06:04.395 --rc genhtml_function_coverage=1 00:06:04.395 --rc genhtml_legend=1 00:06:04.395 --rc geninfo_all_blocks=1 00:06:04.395 --rc geninfo_unexecuted_blocks=1 00:06:04.395 00:06:04.395 ' 00:06:04.395 09:07:58 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:04.395 09:07:58 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:04.395 09:07:58 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:04.395 09:07:58 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:04.395 09:07:58 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:04.395 09:07:58 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:04.395 09:07:58 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:04.395 09:07:58 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:04.395 09:07:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:04.395 09:07:58 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=60520 00:06:04.395 09:07:58 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 60520 00:06:04.395 09:07:58 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:04.395 09:07:58 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 60520 ']' 00:06:04.395 09:07:58 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.395 09:07:58 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.395 09:07:58 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.395 09:07:58 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.395 09:07:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:04.395 [2024-12-13 09:07:58.179032] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:04.395 [2024-12-13 09:07:58.179499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60520 ] 00:06:04.692 [2024-12-13 09:07:58.356694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.692 [2024-12-13 09:07:58.441986] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.692 [2024-12-13 09:07:58.441999] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.952 [2024-12-13 09:07:58.640469] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:05.521 09:07:59 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.521 09:07:59 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:05.521 09:07:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=60537 00:06:05.521 09:07:59 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:05.521 09:07:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:05.521 [ 00:06:05.521 "bdev_malloc_delete", 00:06:05.521 "bdev_malloc_create", 00:06:05.521 "bdev_null_resize", 00:06:05.521 "bdev_null_delete", 00:06:05.521 "bdev_null_create", 00:06:05.521 "bdev_nvme_cuse_unregister", 00:06:05.521 "bdev_nvme_cuse_register", 00:06:05.521 "bdev_opal_new_user", 00:06:05.521 "bdev_opal_set_lock_state", 00:06:05.521 "bdev_opal_delete", 00:06:05.521 "bdev_opal_get_info", 00:06:05.521 "bdev_opal_create", 00:06:05.521 "bdev_nvme_opal_revert", 00:06:05.521 "bdev_nvme_opal_init", 00:06:05.521 "bdev_nvme_send_cmd", 00:06:05.521 "bdev_nvme_set_keys", 00:06:05.521 "bdev_nvme_get_path_iostat", 00:06:05.521 "bdev_nvme_get_mdns_discovery_info", 00:06:05.521 "bdev_nvme_stop_mdns_discovery", 00:06:05.521 "bdev_nvme_start_mdns_discovery", 00:06:05.521 "bdev_nvme_set_multipath_policy", 00:06:05.521 "bdev_nvme_set_preferred_path", 00:06:05.521 "bdev_nvme_get_io_paths", 00:06:05.521 "bdev_nvme_remove_error_injection", 00:06:05.521 "bdev_nvme_add_error_injection", 00:06:05.521 "bdev_nvme_get_discovery_info", 00:06:05.521 "bdev_nvme_stop_discovery", 00:06:05.521 "bdev_nvme_start_discovery", 00:06:05.521 "bdev_nvme_get_controller_health_info", 00:06:05.521 "bdev_nvme_disable_controller", 00:06:05.521 "bdev_nvme_enable_controller", 00:06:05.521 "bdev_nvme_reset_controller", 00:06:05.521 "bdev_nvme_get_transport_statistics", 00:06:05.521 "bdev_nvme_apply_firmware", 00:06:05.521 "bdev_nvme_detach_controller", 00:06:05.521 "bdev_nvme_get_controllers", 00:06:05.521 "bdev_nvme_attach_controller", 00:06:05.521 "bdev_nvme_set_hotplug", 00:06:05.521 "bdev_nvme_set_options", 00:06:05.521 "bdev_passthru_delete", 00:06:05.521 "bdev_passthru_create", 00:06:05.521 "bdev_lvol_set_parent_bdev", 00:06:05.521 "bdev_lvol_set_parent", 00:06:05.521 "bdev_lvol_check_shallow_copy", 00:06:05.521 "bdev_lvol_start_shallow_copy", 00:06:05.521 "bdev_lvol_grow_lvstore", 00:06:05.521 "bdev_lvol_get_lvols", 00:06:05.521 "bdev_lvol_get_lvstores", 00:06:05.521 "bdev_lvol_delete", 00:06:05.521 "bdev_lvol_set_read_only", 00:06:05.521 "bdev_lvol_resize", 00:06:05.521 "bdev_lvol_decouple_parent", 00:06:05.521 "bdev_lvol_inflate", 00:06:05.521 "bdev_lvol_rename", 00:06:05.521 "bdev_lvol_clone_bdev", 00:06:05.521 "bdev_lvol_clone", 00:06:05.521 "bdev_lvol_snapshot", 
00:06:05.521 "bdev_lvol_create", 00:06:05.521 "bdev_lvol_delete_lvstore", 00:06:05.521 "bdev_lvol_rename_lvstore", 00:06:05.521 "bdev_lvol_create_lvstore", 00:06:05.521 "bdev_raid_set_options", 00:06:05.521 "bdev_raid_remove_base_bdev", 00:06:05.521 "bdev_raid_add_base_bdev", 00:06:05.521 "bdev_raid_delete", 00:06:05.521 "bdev_raid_create", 00:06:05.521 "bdev_raid_get_bdevs", 00:06:05.521 "bdev_error_inject_error", 00:06:05.521 "bdev_error_delete", 00:06:05.521 "bdev_error_create", 00:06:05.521 "bdev_split_delete", 00:06:05.521 "bdev_split_create", 00:06:05.521 "bdev_delay_delete", 00:06:05.521 "bdev_delay_create", 00:06:05.521 "bdev_delay_update_latency", 00:06:05.521 "bdev_zone_block_delete", 00:06:05.521 "bdev_zone_block_create", 00:06:05.521 "blobfs_create", 00:06:05.521 "blobfs_detect", 00:06:05.521 "blobfs_set_cache_size", 00:06:05.521 "bdev_aio_delete", 00:06:05.521 "bdev_aio_rescan", 00:06:05.521 "bdev_aio_create", 00:06:05.521 "bdev_ftl_set_property", 00:06:05.521 "bdev_ftl_get_properties", 00:06:05.521 "bdev_ftl_get_stats", 00:06:05.521 "bdev_ftl_unmap", 00:06:05.521 "bdev_ftl_unload", 00:06:05.521 "bdev_ftl_delete", 00:06:05.521 "bdev_ftl_load", 00:06:05.521 "bdev_ftl_create", 00:06:05.521 "bdev_virtio_attach_controller", 00:06:05.521 "bdev_virtio_scsi_get_devices", 00:06:05.521 "bdev_virtio_detach_controller", 00:06:05.521 "bdev_virtio_blk_set_hotplug", 00:06:05.521 "bdev_iscsi_delete", 00:06:05.521 "bdev_iscsi_create", 00:06:05.521 "bdev_iscsi_set_options", 00:06:05.521 "bdev_uring_delete", 00:06:05.521 "bdev_uring_rescan", 00:06:05.521 "bdev_uring_create", 00:06:05.521 "accel_error_inject_error", 00:06:05.521 "ioat_scan_accel_module", 00:06:05.521 "dsa_scan_accel_module", 00:06:05.521 "iaa_scan_accel_module", 00:06:05.521 "vfu_virtio_create_fs_endpoint", 00:06:05.521 "vfu_virtio_create_scsi_endpoint", 00:06:05.521 "vfu_virtio_scsi_remove_target", 00:06:05.521 "vfu_virtio_scsi_add_target", 00:06:05.521 "vfu_virtio_create_blk_endpoint", 00:06:05.521 "vfu_virtio_delete_endpoint", 00:06:05.521 "keyring_file_remove_key", 00:06:05.521 "keyring_file_add_key", 00:06:05.521 "keyring_linux_set_options", 00:06:05.521 "fsdev_aio_delete", 00:06:05.521 "fsdev_aio_create", 00:06:05.521 "iscsi_get_histogram", 00:06:05.521 "iscsi_enable_histogram", 00:06:05.521 "iscsi_set_options", 00:06:05.521 "iscsi_get_auth_groups", 00:06:05.521 "iscsi_auth_group_remove_secret", 00:06:05.521 "iscsi_auth_group_add_secret", 00:06:05.521 "iscsi_delete_auth_group", 00:06:05.521 "iscsi_create_auth_group", 00:06:05.521 "iscsi_set_discovery_auth", 00:06:05.521 "iscsi_get_options", 00:06:05.521 "iscsi_target_node_request_logout", 00:06:05.521 "iscsi_target_node_set_redirect", 00:06:05.521 "iscsi_target_node_set_auth", 00:06:05.521 "iscsi_target_node_add_lun", 00:06:05.521 "iscsi_get_stats", 00:06:05.521 "iscsi_get_connections", 00:06:05.521 "iscsi_portal_group_set_auth", 00:06:05.521 "iscsi_start_portal_group", 00:06:05.521 "iscsi_delete_portal_group", 00:06:05.521 "iscsi_create_portal_group", 00:06:05.521 "iscsi_get_portal_groups", 00:06:05.521 "iscsi_delete_target_node", 00:06:05.521 "iscsi_target_node_remove_pg_ig_maps", 00:06:05.521 "iscsi_target_node_add_pg_ig_maps", 00:06:05.521 "iscsi_create_target_node", 00:06:05.521 "iscsi_get_target_nodes", 00:06:05.521 "iscsi_delete_initiator_group", 00:06:05.521 "iscsi_initiator_group_remove_initiators", 00:06:05.521 "iscsi_initiator_group_add_initiators", 00:06:05.521 "iscsi_create_initiator_group", 00:06:05.521 "iscsi_get_initiator_groups", 00:06:05.521 
"nvmf_set_crdt", 00:06:05.521 "nvmf_set_config", 00:06:05.521 "nvmf_set_max_subsystems", 00:06:05.521 "nvmf_stop_mdns_prr", 00:06:05.521 "nvmf_publish_mdns_prr", 00:06:05.521 "nvmf_subsystem_get_listeners", 00:06:05.521 "nvmf_subsystem_get_qpairs", 00:06:05.521 "nvmf_subsystem_get_controllers", 00:06:05.521 "nvmf_get_stats", 00:06:05.521 "nvmf_get_transports", 00:06:05.521 "nvmf_create_transport", 00:06:05.521 "nvmf_get_targets", 00:06:05.521 "nvmf_delete_target", 00:06:05.521 "nvmf_create_target", 00:06:05.521 "nvmf_subsystem_allow_any_host", 00:06:05.521 "nvmf_subsystem_set_keys", 00:06:05.521 "nvmf_subsystem_remove_host", 00:06:05.521 "nvmf_subsystem_add_host", 00:06:05.521 "nvmf_ns_remove_host", 00:06:05.521 "nvmf_ns_add_host", 00:06:05.521 "nvmf_subsystem_remove_ns", 00:06:05.521 "nvmf_subsystem_set_ns_ana_group", 00:06:05.521 "nvmf_subsystem_add_ns", 00:06:05.521 "nvmf_subsystem_listener_set_ana_state", 00:06:05.521 "nvmf_discovery_get_referrals", 00:06:05.521 "nvmf_discovery_remove_referral", 00:06:05.521 "nvmf_discovery_add_referral", 00:06:05.521 "nvmf_subsystem_remove_listener", 00:06:05.521 "nvmf_subsystem_add_listener", 00:06:05.521 "nvmf_delete_subsystem", 00:06:05.521 "nvmf_create_subsystem", 00:06:05.521 "nvmf_get_subsystems", 00:06:05.521 "env_dpdk_get_mem_stats", 00:06:05.521 "nbd_get_disks", 00:06:05.521 "nbd_stop_disk", 00:06:05.521 "nbd_start_disk", 00:06:05.521 "ublk_recover_disk", 00:06:05.521 "ublk_get_disks", 00:06:05.521 "ublk_stop_disk", 00:06:05.521 "ublk_start_disk", 00:06:05.521 "ublk_destroy_target", 00:06:05.521 "ublk_create_target", 00:06:05.521 "virtio_blk_create_transport", 00:06:05.521 "virtio_blk_get_transports", 00:06:05.521 "vhost_controller_set_coalescing", 00:06:05.521 "vhost_get_controllers", 00:06:05.521 "vhost_delete_controller", 00:06:05.521 "vhost_create_blk_controller", 00:06:05.522 "vhost_scsi_controller_remove_target", 00:06:05.522 "vhost_scsi_controller_add_target", 00:06:05.522 "vhost_start_scsi_controller", 00:06:05.522 "vhost_create_scsi_controller", 00:06:05.522 "thread_set_cpumask", 00:06:05.522 "scheduler_set_options", 00:06:05.522 "framework_get_governor", 00:06:05.522 "framework_get_scheduler", 00:06:05.522 "framework_set_scheduler", 00:06:05.522 "framework_get_reactors", 00:06:05.522 "thread_get_io_channels", 00:06:05.522 "thread_get_pollers", 00:06:05.522 "thread_get_stats", 00:06:05.522 "framework_monitor_context_switch", 00:06:05.522 "spdk_kill_instance", 00:06:05.522 "log_enable_timestamps", 00:06:05.522 "log_get_flags", 00:06:05.522 "log_clear_flag", 00:06:05.522 "log_set_flag", 00:06:05.522 "log_get_level", 00:06:05.522 "log_set_level", 00:06:05.522 "log_get_print_level", 00:06:05.522 "log_set_print_level", 00:06:05.522 "framework_enable_cpumask_locks", 00:06:05.522 "framework_disable_cpumask_locks", 00:06:05.522 "framework_wait_init", 00:06:05.522 "framework_start_init", 00:06:05.522 "scsi_get_devices", 00:06:05.522 "bdev_get_histogram", 00:06:05.522 "bdev_enable_histogram", 00:06:05.522 "bdev_set_qos_limit", 00:06:05.522 "bdev_set_qd_sampling_period", 00:06:05.522 "bdev_get_bdevs", 00:06:05.522 "bdev_reset_iostat", 00:06:05.522 "bdev_get_iostat", 00:06:05.522 "bdev_examine", 00:06:05.522 "bdev_wait_for_examine", 00:06:05.522 "bdev_set_options", 00:06:05.522 "accel_get_stats", 00:06:05.522 "accel_set_options", 00:06:05.522 "accel_set_driver", 00:06:05.522 "accel_crypto_key_destroy", 00:06:05.522 "accel_crypto_keys_get", 00:06:05.522 "accel_crypto_key_create", 00:06:05.522 "accel_assign_opc", 00:06:05.522 
"accel_get_module_info", 00:06:05.522 "accel_get_opc_assignments", 00:06:05.522 "vmd_rescan", 00:06:05.522 "vmd_remove_device", 00:06:05.522 "vmd_enable", 00:06:05.522 "sock_get_default_impl", 00:06:05.522 "sock_set_default_impl", 00:06:05.522 "sock_impl_set_options", 00:06:05.522 "sock_impl_get_options", 00:06:05.522 "iobuf_get_stats", 00:06:05.522 "iobuf_set_options", 00:06:05.522 "keyring_get_keys", 00:06:05.522 "vfu_tgt_set_base_path", 00:06:05.522 "framework_get_pci_devices", 00:06:05.522 "framework_get_config", 00:06:05.522 "framework_get_subsystems", 00:06:05.522 "fsdev_set_opts", 00:06:05.522 "fsdev_get_opts", 00:06:05.522 "trace_get_info", 00:06:05.522 "trace_get_tpoint_group_mask", 00:06:05.522 "trace_disable_tpoint_group", 00:06:05.522 "trace_enable_tpoint_group", 00:06:05.522 "trace_clear_tpoint_mask", 00:06:05.522 "trace_set_tpoint_mask", 00:06:05.522 "notify_get_notifications", 00:06:05.522 "notify_get_types", 00:06:05.522 "spdk_get_version", 00:06:05.522 "rpc_get_methods" 00:06:05.522 ] 00:06:05.782 09:07:59 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:05.782 09:07:59 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:05.782 09:07:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:05.782 09:07:59 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:05.782 09:07:59 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 60520 00:06:05.782 09:07:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 60520 ']' 00:06:05.782 09:07:59 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 60520 00:06:05.782 09:07:59 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:05.782 09:07:59 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.782 09:07:59 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60520 00:06:05.782 killing process with pid 60520 00:06:05.782 09:07:59 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.782 09:07:59 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.782 09:07:59 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60520' 00:06:05.782 09:07:59 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 60520 00:06:05.782 09:07:59 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 60520 00:06:07.684 ************************************ 00:06:07.684 END TEST spdkcli_tcp 00:06:07.684 ************************************ 00:06:07.684 00:06:07.684 real 0m3.556s 00:06:07.684 user 0m6.458s 00:06:07.684 sys 0m0.509s 00:06:07.684 09:08:01 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.684 09:08:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:07.684 09:08:01 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:07.684 09:08:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.684 09:08:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.684 09:08:01 -- common/autotest_common.sh@10 -- # set +x 00:06:07.684 ************************************ 00:06:07.684 START TEST dpdk_mem_utility 00:06:07.684 ************************************ 00:06:07.684 09:08:01 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:07.684 * Looking for test storage... 
00:06:07.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:07.684 09:08:01 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:07.684 09:08:01 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:06:07.684 09:08:01 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:07.943 09:08:01 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.943 09:08:01 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:07.943 09:08:01 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.943 09:08:01 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:07.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.943 --rc genhtml_branch_coverage=1 00:06:07.943 --rc genhtml_function_coverage=1 00:06:07.943 --rc genhtml_legend=1 00:06:07.943 --rc geninfo_all_blocks=1 00:06:07.943 --rc geninfo_unexecuted_blocks=1 00:06:07.943 00:06:07.943 ' 00:06:07.943 09:08:01 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:07.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.943 --rc 
genhtml_branch_coverage=1 00:06:07.943 --rc genhtml_function_coverage=1 00:06:07.943 --rc genhtml_legend=1 00:06:07.943 --rc geninfo_all_blocks=1 00:06:07.943 --rc geninfo_unexecuted_blocks=1 00:06:07.943 00:06:07.943 ' 00:06:07.943 09:08:01 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:07.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.943 --rc genhtml_branch_coverage=1 00:06:07.943 --rc genhtml_function_coverage=1 00:06:07.943 --rc genhtml_legend=1 00:06:07.943 --rc geninfo_all_blocks=1 00:06:07.943 --rc geninfo_unexecuted_blocks=1 00:06:07.943 00:06:07.943 ' 00:06:07.943 09:08:01 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:07.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.943 --rc genhtml_branch_coverage=1 00:06:07.943 --rc genhtml_function_coverage=1 00:06:07.943 --rc genhtml_legend=1 00:06:07.943 --rc geninfo_all_blocks=1 00:06:07.943 --rc geninfo_unexecuted_blocks=1 00:06:07.943 00:06:07.943 ' 00:06:07.943 09:08:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:07.943 09:08:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60631 00:06:07.943 09:08:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:07.943 09:08:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60631 00:06:07.943 09:08:01 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 60631 ']' 00:06:07.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.943 09:08:01 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.943 09:08:01 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.943 09:08:01 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.943 09:08:01 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.943 09:08:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:07.943 [2024-12-13 09:08:01.726758] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:07.943 [2024-12-13 09:08:01.727055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60631 ] 00:06:08.203 [2024-12-13 09:08:01.899007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.203 [2024-12-13 09:08:02.000099] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.462 [2024-12-13 09:08:02.209752] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.031 09:08:02 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.031 09:08:02 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:09.031 09:08:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:09.031 09:08:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:09.031 09:08:02 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.031 09:08:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:09.031 { 00:06:09.031 "filename": "/tmp/spdk_mem_dump.txt" 00:06:09.031 } 00:06:09.031 09:08:02 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.031 09:08:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:09.031 DPDK memory size 824.000000 MiB in 1 heap(s) 00:06:09.031 1 heaps totaling size 824.000000 MiB 00:06:09.031 size: 824.000000 MiB heap id: 0 00:06:09.031 end heaps---------- 00:06:09.031 9 mempools totaling size 603.782043 MiB 00:06:09.031 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:09.031 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:09.031 size: 100.555481 MiB name: bdev_io_60631 00:06:09.031 size: 50.003479 MiB name: msgpool_60631 00:06:09.031 size: 36.509338 MiB name: fsdev_io_60631 00:06:09.031 size: 21.763794 MiB name: PDU_Pool 00:06:09.031 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:09.031 size: 4.133484 MiB name: evtpool_60631 00:06:09.031 size: 0.026123 MiB name: Session_Pool 00:06:09.031 end mempools------- 00:06:09.031 6 memzones totaling size 4.142822 MiB 00:06:09.031 size: 1.000366 MiB name: RG_ring_0_60631 00:06:09.031 size: 1.000366 MiB name: RG_ring_1_60631 00:06:09.031 size: 1.000366 MiB name: RG_ring_4_60631 00:06:09.031 size: 1.000366 MiB name: RG_ring_5_60631 00:06:09.031 size: 0.125366 MiB name: RG_ring_2_60631 00:06:09.031 size: 0.015991 MiB name: RG_ring_3_60631 00:06:09.031 end memzones------- 00:06:09.031 09:08:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:09.031 heap id: 0 total size: 824.000000 MiB number of busy elements: 322 number of free elements: 18 00:06:09.031 list of free elements. 
size: 16.779663 MiB 00:06:09.031 element at address: 0x200006400000 with size: 1.995972 MiB 00:06:09.031 element at address: 0x20000a600000 with size: 1.995972 MiB 00:06:09.031 element at address: 0x200003e00000 with size: 1.991028 MiB 00:06:09.031 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:09.031 element at address: 0x200019900040 with size: 0.999939 MiB 00:06:09.031 element at address: 0x200019a00000 with size: 0.999084 MiB 00:06:09.031 element at address: 0x200032600000 with size: 0.994324 MiB 00:06:09.031 element at address: 0x200000400000 with size: 0.992004 MiB 00:06:09.031 element at address: 0x200019200000 with size: 0.959656 MiB 00:06:09.031 element at address: 0x200019d00040 with size: 0.936401 MiB 00:06:09.031 element at address: 0x200000200000 with size: 0.716980 MiB 00:06:09.031 element at address: 0x20001b400000 with size: 0.561218 MiB 00:06:09.031 element at address: 0x200000c00000 with size: 0.489197 MiB 00:06:09.031 element at address: 0x200019600000 with size: 0.487976 MiB 00:06:09.031 element at address: 0x200019e00000 with size: 0.485413 MiB 00:06:09.031 element at address: 0x200012c00000 with size: 0.433228 MiB 00:06:09.031 element at address: 0x200028800000 with size: 0.390442 MiB 00:06:09.031 element at address: 0x200000800000 with size: 0.350891 MiB 00:06:09.031 list of standard malloc elements. size: 199.289429 MiB 00:06:09.031 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:06:09.031 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:06:09.031 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:09.031 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:09.031 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:06:09.031 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:09.031 element at address: 0x200019deff40 with size: 0.062683 MiB 00:06:09.031 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:09.031 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:06:09.031 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:06:09.031 element at address: 0x200012bff040 with size: 0.000305 MiB 00:06:09.031 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:09.031 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:09.031 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:06:09.031 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:06:09.031 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:06:09.031 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:06:09.031 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:06:09.031 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:06:09.031 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:06:09.031 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:06:09.031 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:06:09.031 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:06:09.031 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:06:09.031 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:06:09.031 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:06:09.031 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:06:09.031 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:06:09.031 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:06:09.031 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:06:09.031 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:06:09.031 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:06:09.032 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:06:09.032 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:06:09.032 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:06:09.032 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:06:09.032 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:06:09.032 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:06:09.032 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:06:09.032 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:06:09.032 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:06:09.032 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:06:09.032 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200000cff000 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200012bff180 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200012bff280 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200012bff380 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200012bff480 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200012bff580 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200012bff680 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200012bff780 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200012bff880 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200012bff980 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200012c6f780 
with size: 0.000244 MiB 00:06:09.032 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200019affc40 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b4916c0 with size: 0.000244 MiB 
00:06:09.032 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:06:09.032 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:06:09.033 element at 
address: 0x20001b4948c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:06:09.033 element at address: 0x200028863f40 with size: 0.000244 MiB 00:06:09.033 element at address: 0x200028864040 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886af80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886b080 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886b180 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886b280 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886b380 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886b480 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886b580 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886b680 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886b780 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886b880 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886b980 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886be80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886c080 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886c180 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886c280 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886c380 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886c480 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886c580 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886c680 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886c780 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886c880 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886c980 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886d080 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886d180 
with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886d280 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886d380 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886d480 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886d580 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886d680 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886d780 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886d880 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886d980 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886da80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886db80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886de80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886df80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886e080 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886e180 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886e280 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886e380 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886e480 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886e580 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886e680 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886e780 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886e880 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886e980 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886f080 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886f180 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886f280 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886f380 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886f480 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886f580 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886f680 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886f780 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886f880 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886f980 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:06:09.033 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:06:09.033 list of memzone associated elements. 
size: 607.930908 MiB 00:06:09.033 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:06:09.033 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:09.033 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:06:09.033 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:09.033 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:06:09.033 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_60631_0 00:06:09.033 element at address: 0x200000dff340 with size: 48.003113 MiB 00:06:09.033 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60631_0 00:06:09.033 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:06:09.033 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_60631_0 00:06:09.033 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:06:09.033 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:09.033 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:06:09.033 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:09.033 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:06:09.033 associated memzone info: size: 3.000122 MiB name: MP_evtpool_60631_0 00:06:09.033 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:06:09.033 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60631 00:06:09.033 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:09.033 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60631 00:06:09.034 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:06:09.034 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:09.034 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:06:09.034 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:09.034 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:09.034 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:09.034 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:06:09.034 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:09.034 element at address: 0x200000cff100 with size: 1.000549 MiB 00:06:09.034 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60631 00:06:09.034 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:06:09.034 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60631 00:06:09.034 element at address: 0x200019affd40 with size: 1.000549 MiB 00:06:09.034 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60631 00:06:09.034 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:06:09.034 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60631 00:06:09.034 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:06:09.034 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_60631 00:06:09.034 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:06:09.034 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60631 00:06:09.034 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:06:09.034 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:09.034 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:06:09.034 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:09.034 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:06:09.034 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:09.034 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:06:09.034 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_60631 00:06:09.034 element at address: 0x20000085df80 with size: 0.125549 MiB 00:06:09.034 associated memzone info: size: 0.125366 MiB name: RG_ring_2_60631 00:06:09.034 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:06:09.034 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:09.034 element at address: 0x200028864140 with size: 0.023804 MiB 00:06:09.034 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:09.034 element at address: 0x200000859d40 with size: 0.016174 MiB 00:06:09.034 associated memzone info: size: 0.015991 MiB name: RG_ring_3_60631 00:06:09.034 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:06:09.034 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:09.034 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:06:09.034 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60631 00:06:09.034 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:06:09.034 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_60631 00:06:09.034 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:06:09.034 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60631 00:06:09.034 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:06:09.034 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:09.034 09:08:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:09.034 09:08:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60631 00:06:09.034 09:08:02 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 60631 ']' 00:06:09.034 09:08:02 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 60631 00:06:09.034 09:08:02 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:09.034 09:08:02 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.034 09:08:02 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60631 00:06:09.034 killing process with pid 60631 00:06:09.034 09:08:02 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.034 09:08:02 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.034 09:08:02 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60631' 00:06:09.034 09:08:02 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 60631 00:06:09.034 09:08:02 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 60631 00:06:10.940 00:06:10.940 real 0m3.185s 00:06:10.940 user 0m3.320s 00:06:10.940 sys 0m0.485s 00:06:10.940 09:08:04 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.940 ************************************ 00:06:10.940 END TEST dpdk_mem_utility 00:06:10.940 ************************************ 00:06:10.940 09:08:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:10.940 09:08:04 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:10.940 09:08:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.940 09:08:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.940 09:08:04 -- common/autotest_common.sh@10 -- # set +x 
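For reference, the memory accounting exercised by test_dpdk_mem_info.sh above can be reproduced by hand against a running spdk_tgt. This is a minimal sketch, not part of the captured log, and it assumes the target is listening on the default /var/tmp/spdk.sock RPC socket (the test run itself used the pid-specific file prefix shown in its EAL parameters):

cd /home/vagrant/spdk_repo/spdk
# confirm the RPC is exposed by the running target (it appears in the rpc_get_methods list earlier in this log)
./scripts/rpc.py rpc_get_methods | grep env_dpdk_get_mem_stats
# ask the target to dump its DPDK memory state; the reply names the dump file
./scripts/rpc.py env_dpdk_get_mem_stats        # -> { "filename": "/tmp/spdk_mem_dump.txt" }
# summarize heaps, mempools and memzones from that dump
./scripts/dpdk_mem_info.py
# per-element breakdown of heap id 0, matching the element listing printed above
./scripts/dpdk_mem_info.py -m 0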
00:06:10.940 ************************************ 00:06:10.940 START TEST event 00:06:10.940 ************************************ 00:06:10.940 09:08:04 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:10.940 * Looking for test storage... 00:06:10.940 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:10.940 09:08:04 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:10.940 09:08:04 event -- common/autotest_common.sh@1711 -- # lcov --version 00:06:10.940 09:08:04 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:11.200 09:08:04 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:11.200 09:08:04 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.200 09:08:04 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.200 09:08:04 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.200 09:08:04 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.200 09:08:04 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.200 09:08:04 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.200 09:08:04 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.200 09:08:04 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.200 09:08:04 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.200 09:08:04 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.200 09:08:04 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.200 09:08:04 event -- scripts/common.sh@344 -- # case "$op" in 00:06:11.200 09:08:04 event -- scripts/common.sh@345 -- # : 1 00:06:11.200 09:08:04 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.200 09:08:04 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.200 09:08:04 event -- scripts/common.sh@365 -- # decimal 1 00:06:11.200 09:08:04 event -- scripts/common.sh@353 -- # local d=1 00:06:11.200 09:08:04 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.200 09:08:04 event -- scripts/common.sh@355 -- # echo 1 00:06:11.200 09:08:04 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.200 09:08:04 event -- scripts/common.sh@366 -- # decimal 2 00:06:11.200 09:08:04 event -- scripts/common.sh@353 -- # local d=2 00:06:11.200 09:08:04 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.200 09:08:04 event -- scripts/common.sh@355 -- # echo 2 00:06:11.200 09:08:04 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.200 09:08:04 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.200 09:08:04 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.200 09:08:04 event -- scripts/common.sh@368 -- # return 0 00:06:11.200 09:08:04 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.200 09:08:04 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:11.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.200 --rc genhtml_branch_coverage=1 00:06:11.200 --rc genhtml_function_coverage=1 00:06:11.200 --rc genhtml_legend=1 00:06:11.200 --rc geninfo_all_blocks=1 00:06:11.200 --rc geninfo_unexecuted_blocks=1 00:06:11.200 00:06:11.200 ' 00:06:11.200 09:08:04 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:11.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.200 --rc genhtml_branch_coverage=1 00:06:11.200 --rc genhtml_function_coverage=1 00:06:11.200 --rc genhtml_legend=1 00:06:11.200 --rc 
geninfo_all_blocks=1 00:06:11.200 --rc geninfo_unexecuted_blocks=1 00:06:11.200 00:06:11.200 ' 00:06:11.200 09:08:04 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:11.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.200 --rc genhtml_branch_coverage=1 00:06:11.200 --rc genhtml_function_coverage=1 00:06:11.200 --rc genhtml_legend=1 00:06:11.200 --rc geninfo_all_blocks=1 00:06:11.200 --rc geninfo_unexecuted_blocks=1 00:06:11.200 00:06:11.200 ' 00:06:11.200 09:08:04 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:11.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.200 --rc genhtml_branch_coverage=1 00:06:11.200 --rc genhtml_function_coverage=1 00:06:11.201 --rc genhtml_legend=1 00:06:11.201 --rc geninfo_all_blocks=1 00:06:11.201 --rc geninfo_unexecuted_blocks=1 00:06:11.201 00:06:11.201 ' 00:06:11.201 09:08:04 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:11.201 09:08:04 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:11.201 09:08:04 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:11.201 09:08:04 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:11.201 09:08:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.201 09:08:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.201 ************************************ 00:06:11.201 START TEST event_perf 00:06:11.201 ************************************ 00:06:11.201 09:08:04 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:11.201 Running I/O for 1 seconds...[2024-12-13 09:08:04.949777] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:11.201 [2024-12-13 09:08:04.949914] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60735 ] 00:06:11.459 [2024-12-13 09:08:05.117950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:11.459 [2024-12-13 09:08:05.222902] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.459 [2024-12-13 09:08:05.223040] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.459 [2024-12-13 09:08:05.223152] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.459 [2024-12-13 09:08:05.223169] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.834 Running I/O for 1 seconds... 00:06:12.834 lcore 0: 195197 00:06:12.834 lcore 1: 195198 00:06:12.834 lcore 2: 195198 00:06:12.834 lcore 3: 195199 00:06:12.834 done. 
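The event framework perf binaries exercised in this test group can also be run standalone with the same flags the suite passes (-m is the reactor core mask, -t the run time in seconds); a minimal sketch, assuming the repo checkout used above:

cd /home/vagrant/spdk_repo/spdk
# 4 reactors (mask 0xF), 1 second run; prints per-lcore event counts as above
./test/event/event_perf/event_perf -m 0xF -t 1
# single-reactor timer/poller exercise for 1 second
./test/event/reactor/reactor -t 1
# single-reactor event throughput for 1 second; reports events per second
./test/event/reactor_perf/reactor_perf -t 1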
00:06:12.834 00:06:12.834 real 0m1.559s 00:06:12.834 user 0m4.315s 00:06:12.834 sys 0m0.100s 00:06:12.834 ************************************ 00:06:12.834 END TEST event_perf 00:06:12.834 ************************************ 00:06:12.834 09:08:06 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.834 09:08:06 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:12.834 09:08:06 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:12.834 09:08:06 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:12.834 09:08:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.834 09:08:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.834 ************************************ 00:06:12.834 START TEST event_reactor 00:06:12.834 ************************************ 00:06:12.834 09:08:06 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:12.834 [2024-12-13 09:08:06.540016] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:12.834 [2024-12-13 09:08:06.540164] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60773 ] 00:06:13.093 [2024-12-13 09:08:06.722074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.093 [2024-12-13 09:08:06.817765] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.471 test_start 00:06:14.471 oneshot 00:06:14.471 tick 100 00:06:14.471 tick 100 00:06:14.471 tick 250 00:06:14.471 tick 100 00:06:14.471 tick 100 00:06:14.471 tick 100 00:06:14.471 tick 250 00:06:14.471 tick 500 00:06:14.471 tick 100 00:06:14.471 tick 100 00:06:14.471 tick 250 00:06:14.471 tick 100 00:06:14.471 tick 100 00:06:14.471 test_end 00:06:14.471 ************************************ 00:06:14.471 END TEST event_reactor 00:06:14.471 00:06:14.471 real 0m1.541s 00:06:14.471 user 0m1.341s 00:06:14.471 sys 0m0.088s 00:06:14.471 09:08:08 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.471 09:08:08 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:14.471 ************************************ 00:06:14.471 09:08:08 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:14.471 09:08:08 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:14.471 09:08:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.471 09:08:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.471 ************************************ 00:06:14.471 START TEST event_reactor_perf 00:06:14.471 ************************************ 00:06:14.471 09:08:08 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:14.471 [2024-12-13 09:08:08.131501] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:14.471 [2024-12-13 09:08:08.131656] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60815 ] 00:06:14.471 [2024-12-13 09:08:08.309037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.730 [2024-12-13 09:08:08.392902] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.143 test_start 00:06:16.143 test_end 00:06:16.143 Performance: 321379 events per second 00:06:16.143 ************************************ 00:06:16.143 END TEST event_reactor_perf 00:06:16.143 ************************************ 00:06:16.143 00:06:16.143 real 0m1.502s 00:06:16.143 user 0m1.318s 00:06:16.143 sys 0m0.076s 00:06:16.143 09:08:09 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.144 09:08:09 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:16.144 09:08:09 event -- event/event.sh@49 -- # uname -s 00:06:16.144 09:08:09 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:16.144 09:08:09 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:16.144 09:08:09 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.144 09:08:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.144 09:08:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.144 ************************************ 00:06:16.144 START TEST event_scheduler 00:06:16.144 ************************************ 00:06:16.144 09:08:09 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:16.144 * Looking for test storage... 
00:06:16.144 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:16.144 09:08:09 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:16.144 09:08:09 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:16.144 09:08:09 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:16.144 09:08:09 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.144 09:08:09 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:16.144 09:08:09 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.144 09:08:09 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:16.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.144 --rc genhtml_branch_coverage=1 00:06:16.144 --rc genhtml_function_coverage=1 00:06:16.144 --rc genhtml_legend=1 00:06:16.144 --rc geninfo_all_blocks=1 00:06:16.144 --rc geninfo_unexecuted_blocks=1 00:06:16.144 00:06:16.144 ' 00:06:16.144 09:08:09 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:16.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.144 --rc genhtml_branch_coverage=1 00:06:16.144 --rc genhtml_function_coverage=1 00:06:16.144 --rc genhtml_legend=1 00:06:16.144 --rc geninfo_all_blocks=1 00:06:16.144 --rc geninfo_unexecuted_blocks=1 00:06:16.144 00:06:16.144 ' 00:06:16.144 09:08:09 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:16.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.144 --rc genhtml_branch_coverage=1 00:06:16.144 --rc genhtml_function_coverage=1 00:06:16.144 --rc genhtml_legend=1 00:06:16.144 --rc geninfo_all_blocks=1 00:06:16.144 --rc geninfo_unexecuted_blocks=1 00:06:16.144 00:06:16.144 ' 00:06:16.144 09:08:09 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:16.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.144 --rc genhtml_branch_coverage=1 00:06:16.144 --rc genhtml_function_coverage=1 00:06:16.144 --rc genhtml_legend=1 00:06:16.144 --rc geninfo_all_blocks=1 00:06:16.144 --rc geninfo_unexecuted_blocks=1 00:06:16.144 00:06:16.144 ' 00:06:16.144 09:08:09 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:16.144 09:08:09 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60880 00:06:16.144 09:08:09 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:16.144 09:08:09 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60880 00:06:16.144 09:08:09 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:16.144 09:08:09 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 60880 ']' 00:06:16.144 09:08:09 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.144 09:08:09 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.144 09:08:09 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.144 09:08:09 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.144 09:08:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:16.144 [2024-12-13 09:08:09.945751] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:16.144 [2024-12-13 09:08:09.946123] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60880 ] 00:06:16.403 [2024-12-13 09:08:10.134459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:16.403 [2024-12-13 09:08:10.266327] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.403 [2024-12-13 09:08:10.266472] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.403 [2024-12-13 09:08:10.266598] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.403 [2024-12-13 09:08:10.266592] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.340 09:08:10 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.340 09:08:10 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:17.340 09:08:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:17.340 09:08:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.340 09:08:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:17.340 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:17.340 POWER: Cannot set governor of lcore 0 to userspace 00:06:17.340 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:17.340 POWER: Cannot set governor of lcore 0 to performance 00:06:17.340 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:17.340 POWER: Cannot set governor of lcore 0 to userspace 00:06:17.340 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:17.340 POWER: Cannot set governor of lcore 0 to userspace 00:06:17.340 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:17.340 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:17.340 POWER: Unable to set Power Management Environment for lcore 0 00:06:17.340 [2024-12-13 09:08:10.889042] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:17.340 [2024-12-13 09:08:10.889065] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:17.340 [2024-12-13 09:08:10.889079] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:17.340 [2024-12-13 09:08:10.889103] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:17.340 [2024-12-13 09:08:10.889128] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:17.340 [2024-12-13 09:08:10.889140] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:17.340 09:08:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.340 09:08:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:17.340 09:08:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.340 09:08:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:17.340 [2024-12-13 09:08:11.051223] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.340 [2024-12-13 09:08:11.141418] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:17.340 09:08:11 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.340 09:08:11 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:17.340 09:08:11 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.340 09:08:11 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.340 09:08:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:17.340 ************************************ 00:06:17.340 START TEST scheduler_create_thread 00:06:17.340 ************************************ 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.340 2 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.340 3 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.340 4 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.340 5 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.340 6 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.340 7 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.340 8 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.340 9 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.340 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.600 10 00:06:17.600 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.600 09:08:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:17.600 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.600 09:08:11 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.600 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.600 09:08:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:17.600 09:08:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:17.600 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.600 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.600 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.600 09:08:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:17.600 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.600 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.600 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.600 09:08:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:17.600 09:08:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:17.600 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.600 09:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.537 09:08:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.537 ************************************ 00:06:18.537 END TEST scheduler_create_thread 00:06:18.537 ************************************ 00:06:18.537 00:06:18.537 real 0m1.174s 00:06:18.537 user 0m0.014s 00:06:18.537 sys 0m0.006s 00:06:18.537 09:08:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.537 09:08:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.537 09:08:12 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:18.537 09:08:12 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60880 00:06:18.537 09:08:12 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 60880 ']' 00:06:18.537 09:08:12 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 60880 00:06:18.537 09:08:12 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:18.537 09:08:12 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.537 09:08:12 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60880 00:06:18.537 killing process with pid 60880 00:06:18.537 09:08:12 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:18.537 09:08:12 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:18.537 09:08:12 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
60880' 00:06:18.537 09:08:12 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 60880 00:06:18.537 09:08:12 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 60880 00:06:19.106 [2024-12-13 09:08:12.807610] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:20.043 ************************************ 00:06:20.043 END TEST event_scheduler 00:06:20.043 ************************************ 00:06:20.043 00:06:20.043 real 0m4.043s 00:06:20.043 user 0m6.668s 00:06:20.043 sys 0m0.447s 00:06:20.043 09:08:13 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.043 09:08:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:20.043 09:08:13 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:20.043 09:08:13 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:20.043 09:08:13 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.043 09:08:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.043 09:08:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.043 ************************************ 00:06:20.043 START TEST app_repeat 00:06:20.043 ************************************ 00:06:20.043 09:08:13 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:20.043 09:08:13 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.043 09:08:13 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.043 09:08:13 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:20.043 09:08:13 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.043 09:08:13 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:20.043 09:08:13 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:20.043 09:08:13 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:20.043 09:08:13 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60975 00:06:20.043 09:08:13 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:20.043 09:08:13 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:20.043 Process app_repeat pid: 60975 00:06:20.043 09:08:13 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60975' 00:06:20.043 spdk_app_start Round 0 00:06:20.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.043 09:08:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:20.043 09:08:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:20.043 09:08:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60975 /var/tmp/spdk-nbd.sock 00:06:20.043 09:08:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60975 ']' 00:06:20.043 09:08:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.043 09:08:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.043 09:08:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
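For reference, the app_repeat rounds that follow can be reproduced outside the harness with the same commands it traces. A minimal sketch in shell, assuming an SPDK checkout at /home/vagrant/spdk_repo/spdk and the RPC socket path used in this run (both are specific to this environment); the polling loop is a simplified stand-in for the harness's waitforlisten helper:

    #!/usr/bin/env bash
    set -e
    spdk=/home/vagrant/spdk_repo/spdk
    modprobe nbd
    # same flags the harness passes above: per-app RPC socket, reactors on cores 0-1 (-m 0x3), -t 4
    "$spdk/test/event/app_repeat/app_repeat" -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    # poll until the app answers RPCs on its UNIX domain socket
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    # two 64 MB malloc bdevs with 4 KiB blocks, exported as /dev/nbd0 and /dev/nbd1
    "$spdk/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc0
    "$spdk/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc1
    "$spdk/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    "$spdk/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1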
00:06:20.043 09:08:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.043 09:08:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.043 [2024-12-13 09:08:13.818849] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:20.043 [2024-12-13 09:08:13.819267] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60975 ] 00:06:20.302 [2024-12-13 09:08:13.997230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.302 [2024-12-13 09:08:14.085035] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.302 [2024-12-13 09:08:14.085048] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.561 [2024-12-13 09:08:14.243154] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.129 09:08:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.129 09:08:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:21.129 09:08:14 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.388 Malloc0 00:06:21.388 09:08:15 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.647 Malloc1 00:06:21.647 09:08:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.647 09:08:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.647 09:08:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.647 09:08:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:21.647 09:08:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.647 09:08:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:21.647 09:08:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.647 09:08:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.647 09:08:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.647 09:08:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:21.647 09:08:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.647 09:08:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:21.647 09:08:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:21.647 09:08:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:21.647 09:08:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.647 09:08:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:21.906 /dev/nbd0 00:06:21.906 09:08:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:21.906 09:08:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:21.906 09:08:15 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:06:21.906 09:08:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:21.906 09:08:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.906 09:08:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.906 09:08:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:22.165 09:08:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:22.165 09:08:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.165 09:08:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.165 09:08:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.165 1+0 records in 00:06:22.165 1+0 records out 00:06:22.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613887 s, 6.7 MB/s 00:06:22.165 09:08:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.165 09:08:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:22.165 09:08:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.165 09:08:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.165 09:08:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:22.165 09:08:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.165 09:08:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.165 09:08:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:22.165 /dev/nbd1 00:06:22.424 09:08:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:22.424 09:08:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:22.424 09:08:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:22.424 09:08:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:22.424 09:08:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.424 09:08:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.424 09:08:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:22.424 09:08:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:22.424 09:08:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.424 09:08:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.424 09:08:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.424 1+0 records in 00:06:22.424 1+0 records out 00:06:22.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327364 s, 12.5 MB/s 00:06:22.424 09:08:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.424 09:08:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:22.424 09:08:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.424 09:08:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.424 09:08:16 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:06:22.424 09:08:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.424 09:08:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.424 09:08:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.424 09:08:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.424 09:08:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:22.684 { 00:06:22.684 "nbd_device": "/dev/nbd0", 00:06:22.684 "bdev_name": "Malloc0" 00:06:22.684 }, 00:06:22.684 { 00:06:22.684 "nbd_device": "/dev/nbd1", 00:06:22.684 "bdev_name": "Malloc1" 00:06:22.684 } 00:06:22.684 ]' 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:22.684 { 00:06:22.684 "nbd_device": "/dev/nbd0", 00:06:22.684 "bdev_name": "Malloc0" 00:06:22.684 }, 00:06:22.684 { 00:06:22.684 "nbd_device": "/dev/nbd1", 00:06:22.684 "bdev_name": "Malloc1" 00:06:22.684 } 00:06:22.684 ]' 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:22.684 /dev/nbd1' 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:22.684 /dev/nbd1' 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:22.684 256+0 records in 00:06:22.684 256+0 records out 00:06:22.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00443181 s, 237 MB/s 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:22.684 256+0 records in 00:06:22.684 256+0 records out 00:06:22.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285942 s, 36.7 MB/s 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:22.684 256+0 records in 00:06:22.684 
256+0 records out 00:06:22.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298421 s, 35.1 MB/s 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.684 09:08:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.685 09:08:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.685 09:08:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:22.685 09:08:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.685 09:08:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.944 09:08:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.944 09:08:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:23.204 09:08:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:23.204 09:08:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.204 09:08:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.204 09:08:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:23.204 09:08:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.204 09:08:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.204 09:08:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.204 09:08:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:23.204 09:08:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:23.204 09:08:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:23.204 09:08:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:23.204 09:08:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.204 09:08:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:06:23.204 09:08:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:23.464 09:08:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.464 09:08:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.464 09:08:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.464 09:08:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.464 09:08:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.723 09:08:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:23.723 09:08:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:23.723 09:08:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.723 09:08:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:23.723 09:08:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:23.723 09:08:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.723 09:08:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:23.723 09:08:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:23.723 09:08:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:23.723 09:08:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:23.723 09:08:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:23.723 09:08:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:23.723 09:08:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:24.292 09:08:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:25.230 [2024-12-13 09:08:18.771552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.230 [2024-12-13 09:08:18.849536] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.230 [2024-12-13 09:08:18.849540] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.230 [2024-12-13 09:08:18.996516] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.230 [2024-12-13 09:08:18.996696] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:25.230 [2024-12-13 09:08:18.996757] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:27.136 spdk_app_start Round 1 00:06:27.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:27.136 09:08:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:27.136 09:08:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:27.136 09:08:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60975 /var/tmp/spdk-nbd.sock 00:06:27.136 09:08:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60975 ']' 00:06:27.136 09:08:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:27.136 09:08:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.136 09:08:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
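Round 1 repeats the same write/verify cycle that Round 0 just completed: 1 MiB of random data is pushed through each nbd export with O_DIRECT and compared back byte-for-byte. Condensed into a standalone sketch (the temp-file path matches this run; any scratch path works):

    # 256 x 4 KiB = 1 MiB of random data, written to each exported device and verified
    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write through the nbd export
        cmp -b -n 1M "$tmp" "$dev"                              # byte-for-byte readback check
    done
    rm "$tmp"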
00:06:27.136 09:08:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.136 09:08:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:27.395 09:08:21 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.396 09:08:21 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:27.396 09:08:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.655 Malloc0 00:06:27.655 09:08:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.914 Malloc1 00:06:28.174 09:08:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.174 09:08:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.174 09:08:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.174 09:08:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:28.174 09:08:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.174 09:08:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:28.174 09:08:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.174 09:08:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.174 09:08:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.174 09:08:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:28.174 09:08:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.174 09:08:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:28.174 09:08:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:28.174 09:08:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:28.174 09:08:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.174 09:08:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:28.174 /dev/nbd0 00:06:28.174 09:08:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:28.433 09:08:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:28.433 09:08:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:28.433 09:08:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:28.433 09:08:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:28.433 09:08:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:28.433 09:08:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:28.433 09:08:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:28.433 09:08:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:28.433 09:08:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:28.433 09:08:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.433 1+0 records in 00:06:28.433 1+0 records out 
00:06:28.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250949 s, 16.3 MB/s 00:06:28.433 09:08:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.433 09:08:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:28.433 09:08:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.433 09:08:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:28.433 09:08:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:28.433 09:08:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.433 09:08:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.433 09:08:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:28.693 /dev/nbd1 00:06:28.693 09:08:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:28.693 09:08:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:28.693 09:08:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:28.693 09:08:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:28.693 09:08:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:28.693 09:08:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:28.693 09:08:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:28.693 09:08:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:28.693 09:08:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:28.693 09:08:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:28.693 09:08:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.693 1+0 records in 00:06:28.693 1+0 records out 00:06:28.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317003 s, 12.9 MB/s 00:06:28.693 09:08:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.693 09:08:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:28.693 09:08:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.693 09:08:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:28.693 09:08:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:28.693 09:08:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.693 09:08:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.693 09:08:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.693 09:08:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.693 09:08:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.953 09:08:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:28.953 { 00:06:28.953 "nbd_device": "/dev/nbd0", 00:06:28.953 "bdev_name": "Malloc0" 00:06:28.953 }, 00:06:28.953 { 00:06:28.953 "nbd_device": "/dev/nbd1", 00:06:28.953 "bdev_name": "Malloc1" 00:06:28.953 } 
00:06:28.953 ]' 00:06:28.953 09:08:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:28.953 { 00:06:28.953 "nbd_device": "/dev/nbd0", 00:06:28.953 "bdev_name": "Malloc0" 00:06:28.953 }, 00:06:28.953 { 00:06:28.953 "nbd_device": "/dev/nbd1", 00:06:28.953 "bdev_name": "Malloc1" 00:06:28.953 } 00:06:28.953 ]' 00:06:28.953 09:08:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.953 09:08:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:28.953 /dev/nbd1' 00:06:28.953 09:08:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:28.953 /dev/nbd1' 00:06:28.953 09:08:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.953 09:08:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:28.953 09:08:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:28.953 09:08:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:28.953 09:08:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:28.953 09:08:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:28.953 09:08:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.953 09:08:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.953 09:08:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:28.953 09:08:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.953 09:08:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:28.953 09:08:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:28.953 256+0 records in 00:06:28.953 256+0 records out 00:06:28.953 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00784962 s, 134 MB/s 00:06:28.953 09:08:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.953 09:08:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:28.953 256+0 records in 00:06:28.953 256+0 records out 00:06:28.953 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237416 s, 44.2 MB/s 00:06:28.953 09:08:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.953 09:08:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:29.213 256+0 records in 00:06:29.213 256+0 records out 00:06:29.213 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0343183 s, 30.6 MB/s 00:06:29.213 09:08:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:29.213 09:08:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.213 09:08:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.213 09:08:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:29.213 09:08:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:29.213 09:08:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:29.213 09:08:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:29.213 09:08:22 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.213 09:08:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:29.213 09:08:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.213 09:08:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:29.213 09:08:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:29.213 09:08:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:29.213 09:08:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.213 09:08:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.213 09:08:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:29.213 09:08:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:29.213 09:08:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.213 09:08:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:29.472 09:08:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:29.472 09:08:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:29.472 09:08:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:29.472 09:08:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.472 09:08:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.472 09:08:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:29.472 09:08:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.472 09:08:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.472 09:08:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.472 09:08:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:29.731 09:08:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:29.731 09:08:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:29.731 09:08:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:29.731 09:08:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.731 09:08:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.731 09:08:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:29.731 09:08:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.731 09:08:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.731 09:08:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.731 09:08:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.731 09:08:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.990 09:08:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:29.991 09:08:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:29.991 09:08:23 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:29.991 09:08:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:29.991 09:08:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:29.991 09:08:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.991 09:08:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:29.991 09:08:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:29.991 09:08:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:29.991 09:08:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:29.991 09:08:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:29.991 09:08:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:29.991 09:08:23 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:30.559 09:08:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:31.496 [2024-12-13 09:08:25.028520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:31.496 [2024-12-13 09:08:25.109975] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.496 [2024-12-13 09:08:25.109976] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.496 [2024-12-13 09:08:25.255996] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.496 [2024-12-13 09:08:25.256101] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:31.496 [2024-12-13 09:08:25.256120] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:33.400 spdk_app_start Round 2 00:06:33.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:33.400 09:08:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:33.400 09:08:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:33.400 09:08:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60975 /var/tmp/spdk-nbd.sock 00:06:33.400 09:08:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60975 ']' 00:06:33.400 09:08:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.400 09:08:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.400 09:08:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
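Between rounds the exports are detached and the app instance is shut down over RPC, as traced above. A condensed sketch of that teardown, using the socket and script paths from this run (the rpc shell function is only an illustrative shorthand, not part of the harness):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    # detach both exports and wait until /proc/partitions no longer lists them
    for name in nbd0 nbd1; do
        rpc nbd_stop_disk "/dev/$name"
        while grep -q -w "$name" /proc/partitions; do sleep 0.1; done
    done
    rpc nbd_get_disks | jq -r '.[] | .nbd_device'   # expect empty output once both are gone
    rpc spdk_kill_instance SIGTERM                  # ask the app instance to exit cleanly
    sleep 3                                         # the harness pauses before starting the next round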
00:06:33.400 09:08:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.400 09:08:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.659 09:08:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.659 09:08:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:33.659 09:08:27 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.917 Malloc0 00:06:33.917 09:08:27 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.177 Malloc1 00:06:34.177 09:08:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.177 09:08:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.177 09:08:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.177 09:08:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:34.177 09:08:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.177 09:08:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:34.177 09:08:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.177 09:08:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.177 09:08:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.177 09:08:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:34.177 09:08:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.177 09:08:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:34.177 09:08:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:34.177 09:08:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:34.177 09:08:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.177 09:08:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:34.745 /dev/nbd0 00:06:34.745 09:08:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:34.745 09:08:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:34.745 09:08:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:34.745 09:08:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:34.745 09:08:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:34.745 09:08:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:34.745 09:08:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:34.745 09:08:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:34.745 09:08:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:34.745 09:08:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:34.745 09:08:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:34.745 1+0 records in 00:06:34.745 1+0 records out 
00:06:34.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240456 s, 17.0 MB/s 00:06:34.745 09:08:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.745 09:08:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:34.745 09:08:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.745 09:08:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:34.745 09:08:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:34.745 09:08:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.745 09:08:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.745 09:08:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:34.745 /dev/nbd1 00:06:34.745 09:08:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:34.745 09:08:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:34.745 09:08:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:34.745 09:08:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:34.745 09:08:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:34.745 09:08:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:34.745 09:08:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:35.004 09:08:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:35.004 09:08:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:35.004 09:08:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:35.004 09:08:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:35.004 1+0 records in 00:06:35.004 1+0 records out 00:06:35.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303711 s, 13.5 MB/s 00:06:35.004 09:08:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:35.004 09:08:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:35.004 09:08:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:35.004 09:08:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:35.004 09:08:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:35.004 09:08:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.004 09:08:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.004 09:08:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.004 09:08:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.004 09:08:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.263 09:08:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:35.263 { 00:06:35.263 "nbd_device": "/dev/nbd0", 00:06:35.263 "bdev_name": "Malloc0" 00:06:35.263 }, 00:06:35.263 { 00:06:35.263 "nbd_device": "/dev/nbd1", 00:06:35.263 "bdev_name": "Malloc1" 00:06:35.263 } 
00:06:35.263 ]' 00:06:35.263 09:08:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:35.263 { 00:06:35.263 "nbd_device": "/dev/nbd0", 00:06:35.263 "bdev_name": "Malloc0" 00:06:35.263 }, 00:06:35.263 { 00:06:35.263 "nbd_device": "/dev/nbd1", 00:06:35.263 "bdev_name": "Malloc1" 00:06:35.263 } 00:06:35.263 ]' 00:06:35.263 09:08:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.263 09:08:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:35.263 /dev/nbd1' 00:06:35.263 09:08:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:35.263 /dev/nbd1' 00:06:35.263 09:08:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.263 09:08:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:35.263 09:08:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:35.263 09:08:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:35.263 09:08:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:35.263 09:08:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:35.263 09:08:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.263 09:08:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.263 09:08:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:35.263 09:08:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:35.263 09:08:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:35.263 09:08:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:35.263 256+0 records in 00:06:35.263 256+0 records out 00:06:35.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0081874 s, 128 MB/s 00:06:35.263 09:08:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.263 09:08:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:35.263 256+0 records in 00:06:35.263 256+0 records out 00:06:35.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236558 s, 44.3 MB/s 00:06:35.263 09:08:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.263 09:08:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:35.263 256+0 records in 00:06:35.263 256+0 records out 00:06:35.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299385 s, 35.0 MB/s 00:06:35.263 09:08:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:35.263 09:08:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.263 09:08:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.263 09:08:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:35.263 09:08:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:35.263 09:08:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:35.263 09:08:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:35.263 09:08:29 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:35.263 09:08:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:35.263 09:08:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.263 09:08:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:35.263 09:08:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:35.263 09:08:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:35.263 09:08:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.263 09:08:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.263 09:08:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:35.263 09:08:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:35.263 09:08:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.263 09:08:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:35.523 09:08:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:35.523 09:08:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:35.523 09:08:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:35.523 09:08:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.523 09:08:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.523 09:08:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:35.523 09:08:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:35.523 09:08:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.523 09:08:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.523 09:08:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:35.782 09:08:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:35.782 09:08:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:35.782 09:08:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:35.782 09:08:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.782 09:08:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.782 09:08:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:35.782 09:08:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:35.782 09:08:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.782 09:08:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.782 09:08:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.782 09:08:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.357 09:08:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:36.357 09:08:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:36.357 09:08:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:36.357 09:08:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:36.357 09:08:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:36.357 09:08:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.357 09:08:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:36.357 09:08:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:36.357 09:08:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:36.357 09:08:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:36.357 09:08:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:36.357 09:08:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:36.357 09:08:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:36.683 09:08:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:37.620 [2024-12-13 09:08:31.253717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.620 [2024-12-13 09:08:31.334125] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.620 [2024-12-13 09:08:31.334135] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.620 [2024-12-13 09:08:31.481277] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.620 [2024-12-13 09:08:31.481404] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:37.620 [2024-12-13 09:08:31.481430] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:39.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:39.524 09:08:33 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60975 /var/tmp/spdk-nbd.sock 00:06:39.524 09:08:33 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60975 ']' 00:06:39.524 09:08:33 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:39.524 09:08:33 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.524 09:08:33 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:39.524 09:08:33 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.524 09:08:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:39.783 09:08:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.783 09:08:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:39.783 09:08:33 event.app_repeat -- event/event.sh@39 -- # killprocess 60975 00:06:39.783 09:08:33 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60975 ']' 00:06:39.783 09:08:33 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60975 00:06:39.783 09:08:33 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:39.784 09:08:33 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.784 09:08:33 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60975 00:06:40.043 killing process with pid 60975 00:06:40.043 09:08:33 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.043 09:08:33 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.043 09:08:33 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60975' 00:06:40.043 09:08:33 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60975 00:06:40.043 09:08:33 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60975 00:06:40.611 spdk_app_start is called in Round 0. 00:06:40.611 Shutdown signal received, stop current app iteration 00:06:40.611 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:06:40.611 spdk_app_start is called in Round 1. 00:06:40.611 Shutdown signal received, stop current app iteration 00:06:40.611 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:06:40.611 spdk_app_start is called in Round 2. 00:06:40.611 Shutdown signal received, stop current app iteration 00:06:40.611 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:06:40.611 spdk_app_start is called in Round 3. 00:06:40.611 Shutdown signal received, stop current app iteration 00:06:40.611 ************************************ 00:06:40.611 END TEST app_repeat 00:06:40.611 ************************************ 00:06:40.611 09:08:34 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:40.611 09:08:34 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:40.611 00:06:40.611 real 0m20.722s 00:06:40.611 user 0m46.481s 00:06:40.611 sys 0m2.556s 00:06:40.611 09:08:34 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.611 09:08:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:40.871 09:08:34 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:40.871 09:08:34 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:40.871 09:08:34 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.871 09:08:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.871 09:08:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:40.871 ************************************ 00:06:40.871 START TEST cpu_locks 00:06:40.871 ************************************ 00:06:40.871 09:08:34 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:40.871 * Looking for test storage... 
00:06:40.871 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:40.871 09:08:34 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:40.871 09:08:34 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:40.871 09:08:34 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:40.871 09:08:34 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.871 09:08:34 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:40.871 09:08:34 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.871 09:08:34 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:40.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.871 --rc genhtml_branch_coverage=1 00:06:40.871 --rc genhtml_function_coverage=1 00:06:40.871 --rc genhtml_legend=1 00:06:40.871 --rc geninfo_all_blocks=1 00:06:40.871 --rc geninfo_unexecuted_blocks=1 00:06:40.871 00:06:40.871 ' 00:06:40.871 09:08:34 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:40.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.871 --rc genhtml_branch_coverage=1 00:06:40.871 --rc genhtml_function_coverage=1 
00:06:40.871 --rc genhtml_legend=1 00:06:40.871 --rc geninfo_all_blocks=1 00:06:40.871 --rc geninfo_unexecuted_blocks=1 00:06:40.871 00:06:40.871 ' 00:06:40.871 09:08:34 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:40.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.871 --rc genhtml_branch_coverage=1 00:06:40.871 --rc genhtml_function_coverage=1 00:06:40.871 --rc genhtml_legend=1 00:06:40.871 --rc geninfo_all_blocks=1 00:06:40.871 --rc geninfo_unexecuted_blocks=1 00:06:40.871 00:06:40.871 ' 00:06:40.871 09:08:34 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:40.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.871 --rc genhtml_branch_coverage=1 00:06:40.871 --rc genhtml_function_coverage=1 00:06:40.871 --rc genhtml_legend=1 00:06:40.871 --rc geninfo_all_blocks=1 00:06:40.871 --rc geninfo_unexecuted_blocks=1 00:06:40.871 00:06:40.871 ' 00:06:40.871 09:08:34 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:40.871 09:08:34 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:40.871 09:08:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:40.871 09:08:34 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:40.871 09:08:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.871 09:08:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.871 09:08:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.871 ************************************ 00:06:40.871 START TEST default_locks 00:06:40.871 ************************************ 00:06:40.871 09:08:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:40.871 09:08:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=61439 00:06:40.871 09:08:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 61439 00:06:40.871 09:08:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.871 09:08:34 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 61439 ']' 00:06:40.871 09:08:34 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.871 09:08:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.871 09:08:34 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.871 09:08:34 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.871 09:08:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.129 [2024-12-13 09:08:34.864823] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:41.129 [2024-12-13 09:08:34.865239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61439 ] 00:06:41.386 [2024-12-13 09:08:35.041238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.386 [2024-12-13 09:08:35.123069] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.644 [2024-12-13 09:08:35.312483] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.210 09:08:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.210 09:08:35 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:42.210 09:08:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 61439 00:06:42.210 09:08:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 61439 00:06:42.210 09:08:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.468 09:08:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 61439 00:06:42.468 09:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 61439 ']' 00:06:42.468 09:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 61439 00:06:42.468 09:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:42.468 09:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.468 09:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61439 00:06:42.468 killing process with pid 61439 00:06:42.468 09:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.468 09:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.468 09:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61439' 00:06:42.468 09:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 61439 00:06:42.468 09:08:36 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 61439 00:06:44.382 09:08:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 61439 00:06:44.382 09:08:37 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:44.382 09:08:37 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61439 00:06:44.382 09:08:37 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:44.382 09:08:37 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.382 09:08:37 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:44.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:44.382 ERROR: process (pid: 61439) is no longer running 00:06:44.382 09:08:37 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.382 09:08:37 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 61439 00:06:44.382 09:08:37 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 61439 ']' 00:06:44.382 09:08:37 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.382 09:08:37 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.382 09:08:37 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.382 09:08:37 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.382 09:08:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.382 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61439) - No such process 00:06:44.382 09:08:37 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.382 09:08:37 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:44.382 09:08:37 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:44.382 09:08:37 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:44.382 09:08:37 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:44.382 09:08:37 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:44.382 09:08:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:44.382 09:08:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:44.382 09:08:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:44.382 09:08:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:44.382 00:06:44.382 real 0m3.271s 00:06:44.382 user 0m3.398s 00:06:44.382 sys 0m0.604s 00:06:44.382 09:08:38 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.382 ************************************ 00:06:44.382 END TEST default_locks 00:06:44.382 ************************************ 00:06:44.382 09:08:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.382 09:08:38 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:44.382 09:08:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.382 09:08:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.382 09:08:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.382 ************************************ 00:06:44.382 START TEST default_locks_via_rpc 00:06:44.382 ************************************ 00:06:44.382 09:08:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:44.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:44.382 09:08:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=61503 00:06:44.382 09:08:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 61503 00:06:44.382 09:08:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.382 09:08:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61503 ']' 00:06:44.382 09:08:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.382 09:08:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.382 09:08:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.382 09:08:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.382 09:08:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.382 [2024-12-13 09:08:38.183444] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:44.382 [2024-12-13 09:08:38.183615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61503 ] 00:06:44.641 [2024-12-13 09:08:38.363121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.641 [2024-12-13 09:08:38.449512] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.900 [2024-12-13 09:08:38.638826] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.468 09:08:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.468 09:08:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:45.468 09:08:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:45.468 09:08:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.468 09:08:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.468 09:08:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.468 09:08:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:45.468 09:08:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:45.468 09:08:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:45.468 09:08:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:45.468 09:08:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:45.468 09:08:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.468 09:08:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.468 09:08:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.468 09:08:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # 
locks_exist 61503 00:06:45.468 09:08:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 61503 00:06:45.468 09:08:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.727 09:08:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 61503 00:06:45.727 09:08:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 61503 ']' 00:06:45.727 09:08:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 61503 00:06:45.727 09:08:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:45.727 09:08:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.727 09:08:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61503 00:06:45.727 killing process with pid 61503 00:06:45.727 09:08:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.727 09:08:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.727 09:08:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61503' 00:06:45.727 09:08:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 61503 00:06:45.727 09:08:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 61503 00:06:47.632 ************************************ 00:06:47.632 END TEST default_locks_via_rpc 00:06:47.632 ************************************ 00:06:47.632 00:06:47.632 real 0m3.172s 00:06:47.632 user 0m3.317s 00:06:47.632 sys 0m0.525s 00:06:47.632 09:08:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.632 09:08:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.632 09:08:41 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:47.632 09:08:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.632 09:08:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.632 09:08:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.632 ************************************ 00:06:47.632 START TEST non_locking_app_on_locked_coremask 00:06:47.632 ************************************ 00:06:47.632 09:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:47.632 09:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61566 00:06:47.632 09:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61566 /var/tmp/spdk.sock 00:06:47.632 09:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:47.632 09:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61566 ']' 00:06:47.632 09:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.632 09:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:06:47.632 09:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.632 09:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.632 09:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.632 [2024-12-13 09:08:41.417958] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:47.632 [2024-12-13 09:08:41.418133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61566 ] 00:06:47.891 [2024-12-13 09:08:41.600172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.891 [2024-12-13 09:08:41.690994] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.150 [2024-12-13 09:08:41.881288] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.718 09:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.718 09:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:48.718 09:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61582 00:06:48.718 09:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:48.718 09:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61582 /var/tmp/spdk2.sock 00:06:48.719 09:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61582 ']' 00:06:48.719 09:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.719 09:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.719 09:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.719 09:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.719 09:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.719 [2024-12-13 09:08:42.489840] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:48.719 [2024-12-13 09:08:42.489992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61582 ] 00:06:48.978 [2024-12-13 09:08:42.673720] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
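A minimal sketch of the lock check the cpu_locks tests above keep repeating (locks_exist, i.e. lslocks -p PID piped into grep -q spdk_cpu_lock); the helper name and the spdk_cpu_lock name follow the trace, everything else is illustrative:

  # Return success only if the given spdk_tgt pid still holds a CPU core lock.
  # Mirrors the locks_exist pattern traced above; requires lslocks(8).
  locks_exist_sketch() {
      local pid=$1
      # lslocks lists the file locks held by the process; the per-core lock
      # files SPDK takes show up with "spdk_cpu_lock" in their path.
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

  # Example: complain if pid 61503 no longer holds its core lock.
  locks_exist_sketch 61503 || echo 'no spdk_cpu_lock held by 61503'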
00:06:48.978 [2024-12-13 09:08:42.673782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.978 [2024-12-13 09:08:42.852196] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.546 [2024-12-13 09:08:43.237203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.483 09:08:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.483 09:08:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:50.483 09:08:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61566 00:06:50.483 09:08:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61566 00:06:50.483 09:08:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:51.418 09:08:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61566 00:06:51.418 09:08:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61566 ']' 00:06:51.418 09:08:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61566 00:06:51.418 09:08:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:51.418 09:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.418 09:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61566 00:06:51.418 09:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.418 09:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.418 killing process with pid 61566 00:06:51.418 09:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61566' 00:06:51.418 09:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61566 00:06:51.418 09:08:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61566 00:06:55.610 09:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61582 00:06:55.610 09:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61582 ']' 00:06:55.610 09:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61582 00:06:55.610 09:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:55.610 09:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.610 09:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61582 00:06:55.610 killing process with pid 61582 00:06:55.610 09:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.610 09:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.610 09:08:48 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61582' 00:06:55.610 09:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61582 00:06:55.610 09:08:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61582 00:06:57.012 00:06:57.012 real 0m9.275s 00:06:57.012 user 0m9.741s 00:06:57.012 sys 0m1.242s 00:06:57.012 09:08:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.012 ************************************ 00:06:57.012 END TEST non_locking_app_on_locked_coremask 00:06:57.012 ************************************ 00:06:57.012 09:08:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.012 09:08:50 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:57.012 09:08:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.012 09:08:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.012 09:08:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.012 ************************************ 00:06:57.012 START TEST locking_app_on_unlocked_coremask 00:06:57.012 ************************************ 00:06:57.012 09:08:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:57.012 09:08:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61709 00:06:57.012 09:08:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61709 /var/tmp/spdk.sock 00:06:57.012 09:08:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:57.012 09:08:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61709 ']' 00:06:57.012 09:08:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.012 09:08:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.012 09:08:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.012 09:08:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.012 09:08:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.012 [2024-12-13 09:08:50.755547] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:57.012 [2024-12-13 09:08:50.756580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61709 ] 00:06:57.271 [2024-12-13 09:08:50.949893] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
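The kill sequence traced just above (kill -0, uname, ps --no-headers -o comm=, the reactor_0/sudo check, kill, wait) is the shared killprocess helper; a simplified, hedged re-creation, with the sudo special case reduced to a comment:

  # Stop a test target and wait for it to exit. A stand-in for the
  # killprocess helper seen in the trace, not the exact implementation.
  killprocess_sketch() {
      local pid=$1
      [ -n "$pid" ] || return 1               # refuse an empty pid
      kill -0 "$pid" 2>/dev/null || return 0  # nothing to do if already gone
      # The real helper inspects the command name (ps -o comm=) and refuses
      # to signal "sudo" directly; that check is omitted here.
      echo "killing process with pid $pid"
      kill "$pid"
      # wait can only reap children of the current shell, which is how the
      # tests above launch their spdk_tgt instances.
      wait "$pid" 2>/dev/null || true
  }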
00:06:57.271 [2024-12-13 09:08:50.949945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.271 [2024-12-13 09:08:51.060653] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.530 [2024-12-13 09:08:51.247443] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.097 09:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.097 09:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:58.097 09:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61725 00:06:58.097 09:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:58.097 09:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61725 /var/tmp/spdk2.sock 00:06:58.097 09:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61725 ']' 00:06:58.097 09:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.097 09:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.097 09:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.097 09:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.097 09:08:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.097 [2024-12-13 09:08:51.849525] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
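The 'Waiting for process to start up and listen on UNIX domain socket ...' lines come from the waitforlisten helper, which polls the target's RPC socket before the test issues rpc.py calls. A hedged sketch; the retry count and the rpc_get_methods probe are assumptions, the rpc.py path is the one used throughout this log:

  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      local i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
          # rpc_get_methods only answers once the RPC server accepts connections.
          if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; then
              return 0
          fi
          sleep 0.1
      done
      return 1   # gave up after roughly 10 seconds
  }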
00:06:58.097 [2024-12-13 09:08:51.849707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61725 ] 00:06:58.355 [2024-12-13 09:08:52.033896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.355 [2024-12-13 09:08:52.210481] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.923 [2024-12-13 09:08:52.591018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.858 09:08:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.858 09:08:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:59.858 09:08:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61725 00:06:59.858 09:08:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61725 00:06:59.858 09:08:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:00.794 09:08:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61709 00:07:00.794 09:08:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61709 ']' 00:07:00.794 09:08:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61709 00:07:00.794 09:08:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:00.794 09:08:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.794 09:08:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61709 00:07:00.794 09:08:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.794 09:08:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.794 09:08:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61709' 00:07:00.794 killing process with pid 61709 00:07:00.794 09:08:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61709 00:07:00.794 09:08:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61709 00:07:04.981 09:08:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61725 00:07:04.981 09:08:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61725 ']' 00:07:04.981 09:08:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61725 00:07:04.981 09:08:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:04.981 09:08:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.981 09:08:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61725 00:07:04.981 killing process with pid 61725 00:07:04.981 09:08:58 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.981 09:08:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.981 09:08:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61725' 00:07:04.981 09:08:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61725 00:07:04.981 09:08:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61725 00:07:06.361 00:07:06.361 real 0m9.334s 00:07:06.361 user 0m9.860s 00:07:06.361 sys 0m1.222s 00:07:06.361 09:08:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.361 ************************************ 00:07:06.361 END TEST locking_app_on_unlocked_coremask 00:07:06.361 ************************************ 00:07:06.361 09:08:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.361 09:08:59 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:06.361 09:08:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.361 09:08:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.361 09:08:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.361 ************************************ 00:07:06.361 START TEST locking_app_on_locked_coremask 00:07:06.361 ************************************ 00:07:06.361 09:09:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:06.361 09:09:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61852 00:07:06.361 09:09:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61852 /var/tmp/spdk.sock 00:07:06.361 09:09:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61852 ']' 00:07:06.361 09:09:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:06.361 09:09:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.361 09:09:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.361 09:09:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.361 09:09:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.361 09:09:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.361 [2024-12-13 09:09:00.140849] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:06.361 [2024-12-13 09:09:00.141039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61852 ] 00:07:06.620 [2024-12-13 09:09:00.318282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.620 [2024-12-13 09:09:00.408812] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.880 [2024-12-13 09:09:00.610231] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.448 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.448 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:07.448 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61874 00:07:07.448 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61874 /var/tmp/spdk2.sock 00:07:07.448 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:07.448 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:07.448 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61874 /var/tmp/spdk2.sock 00:07:07.448 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:07.448 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.448 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:07.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:07.448 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.448 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61874 /var/tmp/spdk2.sock 00:07:07.448 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61874 ']' 00:07:07.448 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.448 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.448 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:07.448 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.448 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.448 [2024-12-13 09:09:01.218859] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
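locking_app_on_locked_coremask drives the failure path on purpose: the second target (pid 61874) must not come up while pid 61852 holds core 0, so waitforlisten is wrapped in the NOT helper whose es bookkeeping is traced above. A reduced sketch of that wrapper; the es > 128 signal handling of the real helper is only noted in a comment:

  # Run a command that is expected to fail; succeed only if it did fail.
  NOT_sketch() {
      local es=0
      "$@" || es=$?
      # The real helper treats es > 128 (terminated by a signal) as a separate
      # case; this sketch just requires any nonzero exit status.
      (( es != 0 ))
  }

  # Usage, matching the trace above (commented out, needs a running first target):
  # NOT_sketch waitforlisten 61874 /var/tmp/spdk2.sock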
00:07:07.448 [2024-12-13 09:09:01.219309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61874 ] 00:07:07.706 [2024-12-13 09:09:01.414587] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61852 has claimed it. 00:07:07.706 [2024-12-13 09:09:01.414660] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:08.274 ERROR: process (pid: 61874) is no longer running 00:07:08.274 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61874) - No such process 00:07:08.274 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.274 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:08.274 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:08.274 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:08.274 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:08.274 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:08.274 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61852 00:07:08.274 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61852 00:07:08.274 09:09:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:08.533 09:09:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61852 00:07:08.533 09:09:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61852 ']' 00:07:08.533 09:09:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61852 00:07:08.533 09:09:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:08.533 09:09:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.533 09:09:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61852 00:07:08.533 killing process with pid 61852 00:07:08.533 09:09:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.533 09:09:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.533 09:09:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61852' 00:07:08.533 09:09:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61852 00:07:08.533 09:09:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61852 00:07:11.071 00:07:11.071 real 0m4.456s 00:07:11.071 user 0m4.883s 00:07:11.071 sys 0m0.792s 00:07:11.071 09:09:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.071 09:09:04 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:07:11.071 ************************************ 00:07:11.071 END TEST locking_app_on_locked_coremask 00:07:11.071 ************************************ 00:07:11.071 09:09:04 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:11.071 09:09:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.071 09:09:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.071 09:09:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.071 ************************************ 00:07:11.071 START TEST locking_overlapped_coremask 00:07:11.071 ************************************ 00:07:11.071 09:09:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:11.071 09:09:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61942 00:07:11.071 09:09:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61942 /var/tmp/spdk.sock 00:07:11.071 09:09:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61942 ']' 00:07:11.071 09:09:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:11.072 09:09:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.072 09:09:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.072 09:09:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.072 09:09:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.072 09:09:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.072 [2024-12-13 09:09:04.658013] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:11.072 [2024-12-13 09:09:04.658207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61942 ] 00:07:11.072 [2024-12-13 09:09:04.841085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.072 [2024-12-13 09:09:04.949426] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.072 [2024-12-13 09:09:04.949522] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.072 [2024-12-13 09:09:04.949535] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.331 [2024-12-13 09:09:05.186585] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.899 09:09:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.899 09:09:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:11.899 09:09:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:11.899 09:09:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61961 00:07:11.899 09:09:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61961 /var/tmp/spdk2.sock 00:07:11.899 09:09:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:11.899 09:09:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61961 /var/tmp/spdk2.sock 00:07:11.899 09:09:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:11.899 09:09:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.899 09:09:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:11.899 09:09:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.899 09:09:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61961 /var/tmp/spdk2.sock 00:07:11.899 09:09:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61961 ']' 00:07:11.899 09:09:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.899 09:09:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.899 09:09:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.899 09:09:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.899 09:09:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.158 [2024-12-13 09:09:05.880380] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:12.158 [2024-12-13 09:09:05.880705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61961 ] 00:07:12.417 [2024-12-13 09:09:06.075766] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61942 has claimed it. 00:07:12.417 [2024-12-13 09:09:06.075876] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:12.986 ERROR: process (pid: 61961) is no longer running 00:07:12.986 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61961) - No such process 00:07:12.986 09:09:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.986 09:09:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:12.986 09:09:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:12.986 09:09:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:12.986 09:09:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:12.986 09:09:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:12.986 09:09:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:12.986 09:09:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:12.986 09:09:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:12.986 09:09:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:12.986 09:09:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61942 00:07:12.986 09:09:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 61942 ']' 00:07:12.986 09:09:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 61942 00:07:12.986 09:09:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:12.986 09:09:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.986 09:09:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61942 00:07:12.986 09:09:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.986 09:09:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.986 09:09:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61942' 00:07:12.986 killing process with pid 61942 00:07:12.986 09:09:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 61942 00:07:12.986 09:09:06 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 61942 00:07:15.521 00:07:15.521 real 0m4.355s 00:07:15.521 user 0m11.894s 00:07:15.521 sys 0m0.602s 00:07:15.521 ************************************ 00:07:15.521 END TEST locking_overlapped_coremask 00:07:15.521 ************************************ 00:07:15.521 09:09:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.521 09:09:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.521 09:09:08 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:15.521 09:09:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.521 09:09:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.521 09:09:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.521 ************************************ 00:07:15.521 START TEST locking_overlapped_coremask_via_rpc 00:07:15.521 ************************************ 00:07:15.521 09:09:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:15.521 09:09:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=62025 00:07:15.521 09:09:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 62025 /var/tmp/spdk.sock 00:07:15.521 09:09:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:15.521 09:09:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 62025 ']' 00:07:15.521 09:09:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.521 09:09:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.521 09:09:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.521 09:09:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.521 09:09:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.521 [2024-12-13 09:09:09.066983] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:15.522 [2024-12-13 09:09:09.067469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62025 ] 00:07:15.522 [2024-12-13 09:09:09.252532] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
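The two targets in the locking_overlapped_coremask test above were launched with -m 0x7 (cores 0-2) and -m 0x1c (cores 2-4); the shared core 2 is what produced the "Cannot create lock on core 2" failure. A minimal shell sketch of that overlap check, using only the masks printed in the log:

  mask_a=0x7    # first spdk_tgt core mask
  mask_b=0x1c   # second spdk_tgt core mask
  overlap=$(( mask_a & mask_b ))               # 0x4
  printf 'overlapping mask: 0x%x\n' "$overlap"
  for core in $(seq 0 7); do
      (( (overlap >> core) & 1 )) && echo "core $core claimed by both masks"
  done                                          # prints: core 2 claimed by both masks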
00:07:15.522 [2024-12-13 09:09:09.252962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.522 [2024-12-13 09:09:09.363850] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.522 [2024-12-13 09:09:09.363940] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.522 [2024-12-13 09:09:09.363947] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.781 [2024-12-13 09:09:09.609602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:16.349 09:09:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.349 09:09:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:16.349 09:09:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:16.349 09:09:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=62043 00:07:16.349 09:09:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 62043 /var/tmp/spdk2.sock 00:07:16.350 09:09:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 62043 ']' 00:07:16.350 09:09:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.350 09:09:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.350 09:09:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.350 09:09:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.350 09:09:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.608 [2024-12-13 09:09:10.304612] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:16.608 [2024-12-13 09:09:10.305391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62043 ] 00:07:16.867 [2024-12-13 09:09:10.505852] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:16.867 [2024-12-13 09:09:10.505916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:16.867 [2024-12-13 09:09:10.731242] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.867 [2024-12-13 09:09:10.731372] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.867 [2024-12-13 09:09:10.731383] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:07:17.435 [2024-12-13 09:09:11.191157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.340 [2024-12-13 09:09:13.086597] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62025 has claimed it. 00:07:19.340 request: 00:07:19.340 { 00:07:19.340 "method": "framework_enable_cpumask_locks", 00:07:19.340 "req_id": 1 00:07:19.340 } 00:07:19.340 Got JSON-RPC error response 00:07:19.340 response: 00:07:19.340 { 00:07:19.340 "code": -32603, 00:07:19.340 "message": "Failed to claim CPU core: 2" 00:07:19.340 } 00:07:19.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
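The failed framework_enable_cpumask_locks call above is the point of the via_rpc variant: both targets start with --disable-cpumask-locks, the first then claims its cores over RPC, and the second cannot. A minimal sketch of the same two calls issued directly with the repo's rpc.py (rpc_cmd in the harness wraps the same script); socket paths are the ones from the log:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # First target (default socket) takes the core locks:
  "$RPC" -s /var/tmp/spdk.sock framework_enable_cpumask_locks
  # Second target shares core 2, so the same call returns -32603
  # ("Failed to claim CPU core: 2"):
  "$RPC" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
      || echo 'expected failure: core already claimed'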
00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 62025 /var/tmp/spdk.sock 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 62025 ']' 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.340 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.599 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.599 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:19.599 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 62043 /var/tmp/spdk2.sock 00:07:19.599 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 62043 ']' 00:07:19.599 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.599 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.599 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:19.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:19.599 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.599 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.857 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.857 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:19.857 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:19.857 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:19.857 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:19.857 ************************************ 00:07:19.857 END TEST locking_overlapped_coremask_via_rpc 00:07:19.857 ************************************ 00:07:19.858 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:19.858 00:07:19.858 real 0m4.747s 00:07:19.858 user 0m1.783s 00:07:19.858 sys 0m0.204s 00:07:19.858 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.858 09:09:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.858 09:09:13 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:19.858 09:09:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62025 ]] 00:07:19.858 09:09:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62025 00:07:19.858 09:09:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 62025 ']' 00:07:19.858 09:09:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 62025 00:07:19.858 09:09:13 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:19.858 09:09:13 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.858 09:09:13 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62025 00:07:19.858 killing process with pid 62025 00:07:19.858 09:09:13 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.858 09:09:13 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.858 09:09:13 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62025' 00:07:19.858 09:09:13 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 62025 00:07:19.858 09:09:13 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 62025 00:07:22.449 09:09:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62043 ]] 00:07:22.449 09:09:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62043 00:07:22.449 09:09:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 62043 ']' 00:07:22.449 09:09:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 62043 00:07:22.449 09:09:15 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:22.449 09:09:15 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.449 
09:09:15 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62043 00:07:22.449 killing process with pid 62043 00:07:22.449 09:09:15 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:22.449 09:09:15 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:22.449 09:09:15 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62043' 00:07:22.449 09:09:15 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 62043 00:07:22.449 09:09:15 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 62043 00:07:23.827 09:09:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:23.827 Process with pid 62025 is not found 00:07:23.827 Process with pid 62043 is not found 00:07:23.827 09:09:17 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:23.827 09:09:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62025 ]] 00:07:23.827 09:09:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62025 00:07:23.827 09:09:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 62025 ']' 00:07:23.827 09:09:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 62025 00:07:23.827 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (62025) - No such process 00:07:23.827 09:09:17 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 62025 is not found' 00:07:23.827 09:09:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62043 ]] 00:07:23.827 09:09:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62043 00:07:23.827 09:09:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 62043 ']' 00:07:23.827 09:09:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 62043 00:07:23.827 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (62043) - No such process 00:07:23.827 09:09:17 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 62043 is not found' 00:07:23.827 09:09:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:23.827 00:07:23.827 real 0m43.138s 00:07:23.827 user 1m18.987s 00:07:23.827 sys 0m6.193s 00:07:23.827 09:09:17 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.827 09:09:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:23.827 ************************************ 00:07:23.827 END TEST cpu_locks 00:07:23.827 ************************************ 00:07:24.086 00:07:24.086 real 1m13.037s 00:07:24.086 user 2m19.328s 00:07:24.086 sys 0m9.740s 00:07:24.086 09:09:17 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.086 09:09:17 event -- common/autotest_common.sh@10 -- # set +x 00:07:24.086 ************************************ 00:07:24.086 END TEST event 00:07:24.086 ************************************ 00:07:24.086 09:09:17 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:24.086 09:09:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.086 09:09:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.086 09:09:17 -- common/autotest_common.sh@10 -- # set +x 00:07:24.086 ************************************ 00:07:24.086 START TEST thread 00:07:24.086 ************************************ 00:07:24.087 09:09:17 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:24.087 * Looking for test storage... 
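Throughout the cpu_locks suite that just finished, a claimed core corresponds to a file lock on /var/tmp/spdk_cpu_lock_NNN, which is what check_remaining_locks and the lslocks calls above inspect. A minimal sketch of that bookkeeping, with a placeholder pid (the real tests use the pid returned by waitforlisten):

  pid=61852                                    # placeholder; use the live spdk_tgt pid
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null      # one file per claimed core (..._000, _001, ...)
  lslocks -p "$pid" | grep spdk_cpu_lock       # confirm the process really holds the locks
  rm -f /var/tmp/spdk_cpu_lock_*               # cleanup, as in the rm -f step of cpu_locks.sh above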
00:07:24.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:24.087 09:09:17 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:24.087 09:09:17 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:07:24.087 09:09:17 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:24.087 09:09:17 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:24.087 09:09:17 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.087 09:09:17 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.087 09:09:17 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.087 09:09:17 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.087 09:09:17 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.087 09:09:17 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.087 09:09:17 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.087 09:09:17 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.087 09:09:17 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.087 09:09:17 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.087 09:09:17 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.087 09:09:17 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:24.087 09:09:17 thread -- scripts/common.sh@345 -- # : 1 00:07:24.087 09:09:17 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.087 09:09:17 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:24.087 09:09:17 thread -- scripts/common.sh@365 -- # decimal 1 00:07:24.087 09:09:17 thread -- scripts/common.sh@353 -- # local d=1 00:07:24.087 09:09:17 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.087 09:09:17 thread -- scripts/common.sh@355 -- # echo 1 00:07:24.087 09:09:17 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.087 09:09:17 thread -- scripts/common.sh@366 -- # decimal 2 00:07:24.087 09:09:17 thread -- scripts/common.sh@353 -- # local d=2 00:07:24.087 09:09:17 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.087 09:09:17 thread -- scripts/common.sh@355 -- # echo 2 00:07:24.087 09:09:17 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.087 09:09:17 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:24.087 09:09:17 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:24.087 09:09:17 thread -- scripts/common.sh@368 -- # return 0 00:07:24.087 09:09:17 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.087 09:09:17 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:24.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.087 --rc genhtml_branch_coverage=1 00:07:24.087 --rc genhtml_function_coverage=1 00:07:24.087 --rc genhtml_legend=1 00:07:24.087 --rc geninfo_all_blocks=1 00:07:24.087 --rc geninfo_unexecuted_blocks=1 00:07:24.087 00:07:24.087 ' 00:07:24.087 09:09:17 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:24.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.087 --rc genhtml_branch_coverage=1 00:07:24.087 --rc genhtml_function_coverage=1 00:07:24.087 --rc genhtml_legend=1 00:07:24.087 --rc geninfo_all_blocks=1 00:07:24.087 --rc geninfo_unexecuted_blocks=1 00:07:24.087 00:07:24.087 ' 00:07:24.087 09:09:17 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:24.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:24.087 --rc genhtml_branch_coverage=1 00:07:24.087 --rc genhtml_function_coverage=1 00:07:24.087 --rc genhtml_legend=1 00:07:24.087 --rc geninfo_all_blocks=1 00:07:24.087 --rc geninfo_unexecuted_blocks=1 00:07:24.087 00:07:24.087 ' 00:07:24.087 09:09:17 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:24.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.087 --rc genhtml_branch_coverage=1 00:07:24.087 --rc genhtml_function_coverage=1 00:07:24.087 --rc genhtml_legend=1 00:07:24.087 --rc geninfo_all_blocks=1 00:07:24.087 --rc geninfo_unexecuted_blocks=1 00:07:24.087 00:07:24.087 ' 00:07:24.087 09:09:17 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:24.087 09:09:17 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:24.087 09:09:17 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.087 09:09:17 thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.087 ************************************ 00:07:24.087 START TEST thread_poller_perf 00:07:24.087 ************************************ 00:07:24.087 09:09:17 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:24.346 [2024-12-13 09:09:18.003639] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:24.346 [2024-12-13 09:09:18.003910] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62235 ] 00:07:24.346 [2024-12-13 09:09:18.180498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.605 [2024-12-13 09:09:18.305758] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.605 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:25.986 [2024-12-13T09:09:19.876Z] ====================================== 00:07:25.986 [2024-12-13T09:09:19.876Z] busy:2214817966 (cyc) 00:07:25.986 [2024-12-13T09:09:19.876Z] total_run_count: 354000 00:07:25.986 [2024-12-13T09:09:19.876Z] tsc_hz: 2200000000 (cyc) 00:07:25.986 [2024-12-13T09:09:19.876Z] ====================================== 00:07:25.986 [2024-12-13T09:09:19.876Z] poller_cost: 6256 (cyc), 2843 (nsec) 00:07:25.986 00:07:25.986 real 0m1.538s 00:07:25.986 user 0m1.348s 00:07:25.986 sys 0m0.079s 00:07:25.986 09:09:19 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.986 09:09:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:25.986 ************************************ 00:07:25.986 END TEST thread_poller_perf 00:07:25.986 ************************************ 00:07:25.986 09:09:19 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:25.986 09:09:19 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:25.986 09:09:19 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.986 09:09:19 thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.986 ************************************ 00:07:25.986 START TEST thread_poller_perf 00:07:25.986 ************************************ 00:07:25.986 09:09:19 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:25.986 [2024-12-13 09:09:19.595604] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:25.986 [2024-12-13 09:09:19.596122] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62277 ] 00:07:25.986 [2024-12-13 09:09:19.770882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.986 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:25.986 [2024-12-13 09:09:19.853222] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.364 [2024-12-13T09:09:21.254Z] ====================================== 00:07:27.364 [2024-12-13T09:09:21.254Z] busy:2203808404 (cyc) 00:07:27.364 [2024-12-13T09:09:21.254Z] total_run_count: 4267000 00:07:27.364 [2024-12-13T09:09:21.254Z] tsc_hz: 2200000000 (cyc) 00:07:27.364 [2024-12-13T09:09:21.254Z] ====================================== 00:07:27.364 [2024-12-13T09:09:21.254Z] poller_cost: 516 (cyc), 234 (nsec) 00:07:27.364 00:07:27.364 real 0m1.482s 00:07:27.364 user 0m1.284s 00:07:27.364 sys 0m0.089s 00:07:27.364 09:09:21 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.364 09:09:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:27.364 ************************************ 00:07:27.364 END TEST thread_poller_perf 00:07:27.364 ************************************ 00:07:27.364 09:09:21 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:27.364 ************************************ 00:07:27.364 END TEST thread 00:07:27.364 ************************************ 00:07:27.364 00:07:27.364 real 0m3.320s 00:07:27.364 user 0m2.782s 00:07:27.364 sys 0m0.311s 00:07:27.364 09:09:21 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.364 09:09:21 thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.364 09:09:21 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:27.364 09:09:21 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:27.364 09:09:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.364 09:09:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.364 09:09:21 -- common/autotest_common.sh@10 -- # set +x 00:07:27.364 ************************************ 00:07:27.364 START TEST app_cmdline 00:07:27.364 ************************************ 00:07:27.364 09:09:21 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:27.364 * Looking for test storage... 
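The two poller_perf runs above report poller_cost as busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz; this relation is inferred from the printed numbers rather than taken from the tool's source. Re-deriving both results from the log:

  awk 'BEGIN {
      # 1 us period run: 2214817966 busy cyc over 354000 iterations at 2.2 GHz
      printf "poller_cost: %d cyc, %d nsec\n", 2214817966/354000, 2214817966/354000*1e9/2200000000
      # 0 us (busy-poll) run: 2203808404 busy cyc over 4267000 iterations
      printf "poller_cost: %d cyc, %d nsec\n", 2203808404/4267000, 2203808404/4267000*1e9/2200000000
  }'
  # Matches the log: 6256 cyc / 2843 nsec and 516 cyc / 234 nsec.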
00:07:27.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:27.364 09:09:21 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:27.364 09:09:21 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:07:27.364 09:09:21 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:27.623 09:09:21 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.623 09:09:21 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:27.623 09:09:21 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.623 09:09:21 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:27.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.623 --rc genhtml_branch_coverage=1 00:07:27.623 --rc genhtml_function_coverage=1 00:07:27.623 --rc genhtml_legend=1 00:07:27.623 --rc geninfo_all_blocks=1 00:07:27.623 --rc geninfo_unexecuted_blocks=1 00:07:27.623 00:07:27.623 ' 00:07:27.623 09:09:21 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:27.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.623 --rc genhtml_branch_coverage=1 00:07:27.623 --rc genhtml_function_coverage=1 00:07:27.623 --rc genhtml_legend=1 00:07:27.623 --rc geninfo_all_blocks=1 00:07:27.623 --rc geninfo_unexecuted_blocks=1 00:07:27.623 
00:07:27.623 ' 00:07:27.623 09:09:21 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:27.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.623 --rc genhtml_branch_coverage=1 00:07:27.623 --rc genhtml_function_coverage=1 00:07:27.623 --rc genhtml_legend=1 00:07:27.623 --rc geninfo_all_blocks=1 00:07:27.623 --rc geninfo_unexecuted_blocks=1 00:07:27.623 00:07:27.623 ' 00:07:27.623 09:09:21 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:27.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.623 --rc genhtml_branch_coverage=1 00:07:27.623 --rc genhtml_function_coverage=1 00:07:27.623 --rc genhtml_legend=1 00:07:27.623 --rc geninfo_all_blocks=1 00:07:27.623 --rc geninfo_unexecuted_blocks=1 00:07:27.623 00:07:27.623 ' 00:07:27.623 09:09:21 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:27.623 09:09:21 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62359 00:07:27.623 09:09:21 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62359 00:07:27.623 09:09:21 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:27.623 09:09:21 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 62359 ']' 00:07:27.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.623 09:09:21 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.623 09:09:21 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.623 09:09:21 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.624 09:09:21 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.624 09:09:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:27.624 [2024-12-13 09:09:21.458411] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:27.624 [2024-12-13 09:09:21.459143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62359 ] 00:07:27.883 [2024-12-13 09:09:21.642087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.883 [2024-12-13 09:09:21.736604] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.142 [2024-12-13 09:09:21.919260] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.710 09:09:22 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.710 09:09:22 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:28.710 09:09:22 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:28.969 { 00:07:28.969 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:07:28.969 "fields": { 00:07:28.969 "major": 25, 00:07:28.969 "minor": 1, 00:07:28.969 "patch": 0, 00:07:28.969 "suffix": "-pre", 00:07:28.969 "commit": "e01cb43b8" 00:07:28.969 } 00:07:28.969 } 00:07:28.969 09:09:22 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:28.969 09:09:22 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:28.969 09:09:22 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:28.969 09:09:22 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:28.969 09:09:22 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:28.969 09:09:22 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:28.969 09:09:22 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.969 09:09:22 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:28.969 09:09:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.969 09:09:22 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.969 09:09:22 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:28.969 09:09:22 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:28.969 09:09:22 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.969 09:09:22 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:28.969 09:09:22 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.970 09:09:22 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:28.970 09:09:22 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.970 09:09:22 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:28.970 09:09:22 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.970 09:09:22 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:28.970 09:09:22 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.970 09:09:22 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:28.970 09:09:22 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:28.970 09:09:22 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:29.229 request: 00:07:29.229 { 00:07:29.229 "method": "env_dpdk_get_mem_stats", 00:07:29.229 "req_id": 1 00:07:29.229 } 00:07:29.229 Got JSON-RPC error response 00:07:29.229 response: 00:07:29.229 { 00:07:29.229 "code": -32601, 00:07:29.229 "message": "Method not found" 00:07:29.229 } 00:07:29.229 09:09:23 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:29.229 09:09:23 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:29.229 09:09:23 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:29.229 09:09:23 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:29.229 09:09:23 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62359 00:07:29.229 09:09:23 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 62359 ']' 00:07:29.229 09:09:23 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 62359 00:07:29.229 09:09:23 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:29.229 09:09:23 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.229 09:09:23 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62359 00:07:29.229 killing process with pid 62359 00:07:29.229 09:09:23 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.229 09:09:23 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.229 09:09:23 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62359' 00:07:29.229 09:09:23 app_cmdline -- common/autotest_common.sh@973 -- # kill 62359 00:07:29.229 09:09:23 app_cmdline -- common/autotest_common.sh@978 -- # wait 62359 00:07:31.133 00:07:31.133 real 0m3.835s 00:07:31.133 user 0m4.430s 00:07:31.133 sys 0m0.545s 00:07:31.133 ************************************ 00:07:31.133 END TEST app_cmdline 00:07:31.133 09:09:24 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.133 09:09:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:31.133 ************************************ 00:07:31.133 09:09:25 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:31.133 09:09:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:31.133 09:09:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.133 09:09:25 -- common/autotest_common.sh@10 -- # set +x 00:07:31.393 ************************************ 00:07:31.393 START TEST version 00:07:31.393 ************************************ 00:07:31.393 09:09:25 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:31.393 * Looking for test storage... 
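The app_cmdline run above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so the two allowed calls succeed while env_dpdk_get_mem_stats comes back as "Method not found" (-32601). A minimal sketch of exercising that allow-list by hand, assuming the same repo paths as the log and a crude sleep where the harness polls the socket with waitforlisten:

  BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$BIN" --rpcs-allowed spdk_get_version,rpc_get_methods &
  tgt_pid=$!
  sleep 2                                      # crude wait; the tests use waitforlisten instead
  "$RPC" spdk_get_version                      # allowed: prints the version JSON shown above
  "$RPC" rpc_get_methods                       # allowed: lists exactly these two methods
  "$RPC" env_dpdk_get_mem_stats \
      || echo 'expected: Method not found (-32601)'
  kill "$tgt_pid"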
00:07:31.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:31.393 09:09:25 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:31.393 09:09:25 version -- common/autotest_common.sh@1711 -- # lcov --version 00:07:31.393 09:09:25 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:31.393 09:09:25 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:31.393 09:09:25 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.393 09:09:25 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.393 09:09:25 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.393 09:09:25 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.393 09:09:25 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.393 09:09:25 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.393 09:09:25 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.393 09:09:25 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.393 09:09:25 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.393 09:09:25 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.393 09:09:25 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.393 09:09:25 version -- scripts/common.sh@344 -- # case "$op" in 00:07:31.393 09:09:25 version -- scripts/common.sh@345 -- # : 1 00:07:31.393 09:09:25 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.393 09:09:25 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:31.393 09:09:25 version -- scripts/common.sh@365 -- # decimal 1 00:07:31.393 09:09:25 version -- scripts/common.sh@353 -- # local d=1 00:07:31.393 09:09:25 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.393 09:09:25 version -- scripts/common.sh@355 -- # echo 1 00:07:31.393 09:09:25 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.393 09:09:25 version -- scripts/common.sh@366 -- # decimal 2 00:07:31.393 09:09:25 version -- scripts/common.sh@353 -- # local d=2 00:07:31.393 09:09:25 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.393 09:09:25 version -- scripts/common.sh@355 -- # echo 2 00:07:31.393 09:09:25 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.393 09:09:25 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.393 09:09:25 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.393 09:09:25 version -- scripts/common.sh@368 -- # return 0 00:07:31.394 09:09:25 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.394 09:09:25 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:31.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.394 --rc genhtml_branch_coverage=1 00:07:31.394 --rc genhtml_function_coverage=1 00:07:31.394 --rc genhtml_legend=1 00:07:31.394 --rc geninfo_all_blocks=1 00:07:31.394 --rc geninfo_unexecuted_blocks=1 00:07:31.394 00:07:31.394 ' 00:07:31.394 09:09:25 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:31.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.394 --rc genhtml_branch_coverage=1 00:07:31.394 --rc genhtml_function_coverage=1 00:07:31.394 --rc genhtml_legend=1 00:07:31.394 --rc geninfo_all_blocks=1 00:07:31.394 --rc geninfo_unexecuted_blocks=1 00:07:31.394 00:07:31.394 ' 00:07:31.394 09:09:25 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:31.394 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:31.394 --rc genhtml_branch_coverage=1 00:07:31.394 --rc genhtml_function_coverage=1 00:07:31.394 --rc genhtml_legend=1 00:07:31.394 --rc geninfo_all_blocks=1 00:07:31.394 --rc geninfo_unexecuted_blocks=1 00:07:31.394 00:07:31.394 ' 00:07:31.394 09:09:25 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:31.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.394 --rc genhtml_branch_coverage=1 00:07:31.394 --rc genhtml_function_coverage=1 00:07:31.394 --rc genhtml_legend=1 00:07:31.394 --rc geninfo_all_blocks=1 00:07:31.394 --rc geninfo_unexecuted_blocks=1 00:07:31.394 00:07:31.394 ' 00:07:31.394 09:09:25 version -- app/version.sh@17 -- # get_header_version major 00:07:31.394 09:09:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:31.394 09:09:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:31.394 09:09:25 version -- app/version.sh@14 -- # cut -f2 00:07:31.394 09:09:25 version -- app/version.sh@17 -- # major=25 00:07:31.394 09:09:25 version -- app/version.sh@18 -- # get_header_version minor 00:07:31.394 09:09:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:31.394 09:09:25 version -- app/version.sh@14 -- # cut -f2 00:07:31.394 09:09:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:31.394 09:09:25 version -- app/version.sh@18 -- # minor=1 00:07:31.394 09:09:25 version -- app/version.sh@19 -- # get_header_version patch 00:07:31.394 09:09:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:31.394 09:09:25 version -- app/version.sh@14 -- # cut -f2 00:07:31.394 09:09:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:31.394 09:09:25 version -- app/version.sh@19 -- # patch=0 00:07:31.394 09:09:25 version -- app/version.sh@20 -- # get_header_version suffix 00:07:31.394 09:09:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:31.394 09:09:25 version -- app/version.sh@14 -- # cut -f2 00:07:31.394 09:09:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:31.394 09:09:25 version -- app/version.sh@20 -- # suffix=-pre 00:07:31.394 09:09:25 version -- app/version.sh@22 -- # version=25.1 00:07:31.394 09:09:25 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:31.394 09:09:25 version -- app/version.sh@28 -- # version=25.1rc0 00:07:31.394 09:09:25 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:31.394 09:09:25 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:31.394 09:09:25 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:31.682 09:09:25 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:31.682 00:07:31.682 real 0m0.257s 00:07:31.682 user 0m0.169s 00:07:31.682 sys 0m0.125s 00:07:31.682 09:09:25 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.683 ************************************ 00:07:31.683 END TEST version 00:07:31.683 ************************************ 00:07:31.683 09:09:25 version -- common/autotest_common.sh@10 -- # set +x 00:07:31.683 09:09:25 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:31.683 09:09:25 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:31.683 09:09:25 -- spdk/autotest.sh@194 -- # uname -s 00:07:31.683 09:09:25 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:31.683 09:09:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:31.683 09:09:25 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:07:31.683 09:09:25 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:07:31.683 09:09:25 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:31.683 09:09:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:31.683 09:09:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.683 09:09:25 -- common/autotest_common.sh@10 -- # set +x 00:07:31.683 ************************************ 00:07:31.683 START TEST spdk_dd 00:07:31.683 ************************************ 00:07:31.683 09:09:25 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:31.683 * Looking for test storage... 00:07:31.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:31.683 09:09:25 spdk_dd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:31.683 09:09:25 spdk_dd -- common/autotest_common.sh@1711 -- # lcov --version 00:07:31.683 09:09:25 spdk_dd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:31.683 09:09:25 spdk_dd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@345 -- # : 1 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@368 -- # return 0 00:07:31.683 09:09:25 spdk_dd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.683 09:09:25 spdk_dd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:31.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.683 --rc genhtml_branch_coverage=1 00:07:31.683 --rc genhtml_function_coverage=1 00:07:31.683 --rc genhtml_legend=1 00:07:31.683 --rc geninfo_all_blocks=1 00:07:31.683 --rc geninfo_unexecuted_blocks=1 00:07:31.683 00:07:31.683 ' 00:07:31.683 09:09:25 spdk_dd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:31.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.683 --rc genhtml_branch_coverage=1 00:07:31.683 --rc genhtml_function_coverage=1 00:07:31.683 --rc genhtml_legend=1 00:07:31.683 --rc geninfo_all_blocks=1 00:07:31.683 --rc geninfo_unexecuted_blocks=1 00:07:31.683 00:07:31.683 ' 00:07:31.683 09:09:25 spdk_dd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:31.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.683 --rc genhtml_branch_coverage=1 00:07:31.683 --rc genhtml_function_coverage=1 00:07:31.683 --rc genhtml_legend=1 00:07:31.683 --rc geninfo_all_blocks=1 00:07:31.683 --rc geninfo_unexecuted_blocks=1 00:07:31.683 00:07:31.683 ' 00:07:31.683 09:09:25 spdk_dd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:31.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.683 --rc genhtml_branch_coverage=1 00:07:31.683 --rc genhtml_function_coverage=1 00:07:31.683 --rc genhtml_legend=1 00:07:31.683 --rc geninfo_all_blocks=1 00:07:31.683 --rc geninfo_unexecuted_blocks=1 00:07:31.683 00:07:31.683 ' 00:07:31.683 09:09:25 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.683 09:09:25 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.683 09:09:25 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.683 09:09:25 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.683 09:09:25 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.683 09:09:25 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:31.683 09:09:25 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.683 09:09:25 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:32.253 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:32.253 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:32.253 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:32.253 09:09:25 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:32.253 09:09:25 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@233 -- # local class 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@235 -- # local progif 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@236 -- # class=01 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:07:32.253 09:09:25 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:32.253 09:09:25 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:32.254 09:09:25 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:07:32.254 09:09:25 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:32.254 09:09:25 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:32.254 09:09:25 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:32.254 09:09:25 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:32.254 09:09:25 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:32.254 09:09:25 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:32.254 09:09:25 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:32.254 09:09:25 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:32.254 09:09:25 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:32.254 09:09:25 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:32.254 09:09:25 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:07:32.254 09:09:25 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:32.254 09:09:25 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.8 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:32.254 
09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 
09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:32.254 09:09:25 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.3.0 == liburing.so.* ]] 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scsi.so.9.0 == liburing.so.* ]] 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.3.0 == liburing.so.* ]] 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ 
libspdk_fuse_dispatcher.so.1.0 == liburing.so.* ]] 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.254 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@142 
-- # read -r _ lib _ 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@142 -- # read 
-r _ lib _ 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:32.255 * spdk_dd linked to liburing 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:07:32.255 09:09:26 spdk_dd 
-- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:07:32.255 09:09:26 spdk_dd -- 
common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:07:32.255 09:09:26 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:32.255 09:09:26 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:32.256 09:09:26 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:32.256 09:09:26 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:32.256 09:09:26 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:32.256 09:09:26 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.256 09:09:26 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:32.256 ************************************ 00:07:32.256 START TEST spdk_dd_basic_rw 00:07:32.256 ************************************ 00:07:32.256 09:09:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:32.256 * Looking for test storage... 
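As a side note, the two environment checks spdk_dd just went through can be reproduced by hand. The sketch below is assembled from the commands in the trace above: class 01 / subclass 08 / prog-if 02 is the PCI signature for NVMe controllers, and the objdump target is this run's build output, so adjust both for another machine.

    # List NVMe controllers by BDF, using the same lspci/awk pipeline as scripts/common.sh
    lspci -mm -n -D | grep -i -- -p02 | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    # Check whether the dd app is linked against liburing (this is what sets liburing_in_use above)
    objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED | grep liburing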
00:07:32.256 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:32.256 09:09:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:32.256 09:09:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lcov --version 00:07:32.256 09:09:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:32.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.515 --rc genhtml_branch_coverage=1 00:07:32.515 --rc genhtml_function_coverage=1 00:07:32.515 --rc genhtml_legend=1 00:07:32.515 --rc geninfo_all_blocks=1 00:07:32.515 --rc geninfo_unexecuted_blocks=1 00:07:32.515 00:07:32.515 ' 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:32.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.515 --rc genhtml_branch_coverage=1 00:07:32.515 --rc genhtml_function_coverage=1 00:07:32.515 --rc genhtml_legend=1 00:07:32.515 --rc geninfo_all_blocks=1 00:07:32.515 --rc geninfo_unexecuted_blocks=1 00:07:32.515 00:07:32.515 ' 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:32.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.515 --rc genhtml_branch_coverage=1 00:07:32.515 --rc genhtml_function_coverage=1 00:07:32.515 --rc genhtml_legend=1 00:07:32.515 --rc geninfo_all_blocks=1 00:07:32.515 --rc geninfo_unexecuted_blocks=1 00:07:32.515 00:07:32.515 ' 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:32.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.515 --rc genhtml_branch_coverage=1 00:07:32.515 --rc genhtml_function_coverage=1 00:07:32.515 --rc genhtml_legend=1 00:07:32.515 --rc geninfo_all_blocks=1 00:07:32.515 --rc geninfo_unexecuted_blocks=1 00:07:32.515 00:07:32.515 ' 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.515 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.516 09:09:26 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.516 09:09:26 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.516 09:09:26 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.516 09:09:26 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.516 09:09:26 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:32.516 09:09:26 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.516 09:09:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:32.516 09:09:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:32.516 09:09:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:32.516 09:09:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:32.516 09:09:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:32.516 09:09:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:32.516 09:09:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:32.516 09:09:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:32.516 09:09:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:32.516 09:09:26 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:32.516 09:09:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:32.516 09:09:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:32.516 09:09:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:32.777 09:09:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:32.777 09:09:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:32.778 09:09:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:32.778 09:09:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:32.778 09:09:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:32.778 09:09:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:32.778 09:09:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:32.778 09:09:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:32.778 09:09:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:32.778 09:09:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:32.778 09:09:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.778 09:09:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:32.778 09:09:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:32.778 09:09:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:32.778 ************************************ 00:07:32.778 START TEST dd_bs_lt_native_bs 00:07:32.778 ************************************ 00:07:32.778 09:09:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:32.778 09:09:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:07:32.778 09:09:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:32.778 09:09:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.778 09:09:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.778 09:09:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.778 09:09:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.778 09:09:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.778 09:09:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.778 09:09:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.778 09:09:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:32.778 09:09:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:32.778 { 00:07:32.778 "subsystems": [ 00:07:32.778 { 00:07:32.778 "subsystem": "bdev", 00:07:32.778 "config": [ 00:07:32.778 { 00:07:32.778 "params": { 00:07:32.778 "trtype": "pcie", 00:07:32.778 "traddr": "0000:00:10.0", 00:07:32.778 "name": "Nvme0" 00:07:32.778 }, 00:07:32.779 "method": "bdev_nvme_attach_controller" 00:07:32.779 }, 00:07:32.779 { 00:07:32.779 "method": "bdev_wait_for_examine" 00:07:32.779 } 00:07:32.779 ] 00:07:32.779 } 00:07:32.779 ] 00:07:32.779 } 00:07:33.038 [2024-12-13 09:09:26.664454] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:33.038 [2024-12-13 09:09:26.664623] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62725 ] 00:07:33.038 [2024-12-13 09:09:26.851306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.297 [2024-12-13 09:09:26.978074] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.297 [2024-12-13 09:09:27.179566] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.556 [2024-12-13 09:09:27.341541] spdk_dd.c:1159:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:33.556 [2024-12-13 09:09:27.341648] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:34.125 [2024-12-13 09:09:27.786257] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:34.125 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:07:34.125 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:34.125 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:07:34.125 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:07:34.125 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:07:34.125 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:34.125 00:07:34.125 real 0m1.466s 00:07:34.125 user 0m1.197s 00:07:34.125 sys 0m0.226s 00:07:34.125 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.384 
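For context on the check that just ran: dd/common.sh pulled the controller's current LBA format out of the identify dump with a bash regex (LBA Format #04), looked up that format's data size, and arrived at a native block size of 4096 bytes. dd_bs_lt_native_bs then asks spdk_dd to copy with --bs=2048, half the native size, and wraps the call in the NOT helper so the test only passes when spdk_dd exits non-zero; the "--bs value cannot be less than input (1) neither output (4096) native block size" error above is the expected outcome. A minimal sketch of the same probe and negative check, using placeholder names (identify_out, SPDK_DD, CONF) rather than the exact dd/common.sh code:

    # sketch only: derive the native block size from 'identify' output, then expect a too-small --bs to fail
    re='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $identify_out =~ $re ]] && lbaf=${BASH_REMATCH[1]}
    re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $identify_out =~ $re ]] && native_bs=${BASH_REMATCH[1]}
    if "$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=$((native_bs / 2)) --count=1 --json "$CONF"; then
        echo "spdk_dd accepted a --bs below the native block size" >&2
        exit 1
    fi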
************************************ 00:07:34.384 END TEST dd_bs_lt_native_bs 00:07:34.384 ************************************ 00:07:34.384 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:34.384 09:09:28 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:34.384 09:09:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:34.384 09:09:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.384 09:09:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:34.384 ************************************ 00:07:34.384 START TEST dd_rw 00:07:34.384 ************************************ 00:07:34.384 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:07:34.384 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:34.384 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:34.384 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:34.384 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:34.384 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:34.384 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:34.384 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:34.384 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:34.384 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:34.384 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:34.384 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:34.384 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:34.384 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:34.384 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:34.384 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:34.384 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:34.384 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:34.384 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:34.951 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:34.951 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:34.951 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:34.952 09:09:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:34.952 { 00:07:34.952 "subsystems": [ 00:07:34.952 { 00:07:34.952 "subsystem": "bdev", 00:07:34.952 "config": [ 00:07:34.952 { 00:07:34.952 "params": { 00:07:34.952 "trtype": "pcie", 00:07:34.952 "traddr": "0000:00:10.0", 00:07:34.952 "name": "Nvme0" 00:07:34.952 }, 00:07:34.952 "method": "bdev_nvme_attach_controller" 00:07:34.952 }, 00:07:34.952 { 00:07:34.952 "method": "bdev_wait_for_examine" 00:07:34.952 } 00:07:34.952 ] 
00:07:34.952 } 00:07:34.952 ] 00:07:34.952 } 00:07:34.952 [2024-12-13 09:09:28.642514] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:34.952 [2024-12-13 09:09:28.642711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62768 ] 00:07:34.952 [2024-12-13 09:09:28.821393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.211 [2024-12-13 09:09:28.911995] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.211 [2024-12-13 09:09:29.058993] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.470  [2024-12-13T09:09:30.297Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:36.407 00:07:36.407 09:09:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:36.407 09:09:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:36.407 09:09:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:36.407 09:09:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.407 { 00:07:36.407 "subsystems": [ 00:07:36.407 { 00:07:36.407 "subsystem": "bdev", 00:07:36.407 "config": [ 00:07:36.407 { 00:07:36.407 "params": { 00:07:36.407 "trtype": "pcie", 00:07:36.407 "traddr": "0000:00:10.0", 00:07:36.407 "name": "Nvme0" 00:07:36.407 }, 00:07:36.407 "method": "bdev_nvme_attach_controller" 00:07:36.407 }, 00:07:36.407 { 00:07:36.407 "method": "bdev_wait_for_examine" 00:07:36.407 } 00:07:36.407 ] 00:07:36.407 } 00:07:36.407 ] 00:07:36.407 } 00:07:36.407 [2024-12-13 09:09:30.187750] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:36.407 [2024-12-13 09:09:30.187934] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62799 ] 00:07:36.666 [2024-12-13 09:09:30.364909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.666 [2024-12-13 09:09:30.446408] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.925 [2024-12-13 09:09:30.593830] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.925  [2024-12-13T09:09:31.751Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:37.861 00:07:37.861 09:09:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:37.861 09:09:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:37.861 09:09:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:37.861 09:09:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:37.861 09:09:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:37.861 09:09:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:37.861 09:09:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:37.861 09:09:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:37.862 09:09:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:37.862 09:09:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:37.862 09:09:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:37.862 { 00:07:37.862 "subsystems": [ 00:07:37.862 { 00:07:37.862 "subsystem": "bdev", 00:07:37.862 "config": [ 00:07:37.862 { 00:07:37.862 "params": { 00:07:37.862 "trtype": "pcie", 00:07:37.862 "traddr": "0000:00:10.0", 00:07:37.862 "name": "Nvme0" 00:07:37.862 }, 00:07:37.862 "method": "bdev_nvme_attach_controller" 00:07:37.862 }, 00:07:37.862 { 00:07:37.862 "method": "bdev_wait_for_examine" 00:07:37.862 } 00:07:37.862 ] 00:07:37.862 } 00:07:37.862 ] 00:07:37.862 } 00:07:37.862 [2024-12-13 09:09:31.541326] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
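Each dd_rw pass above follows the same shape: spdk_dd writes the generated dd.dump0 file to the Nvme0n1 bdev at the chosen --bs and --qd, reads the same region back into dd.dump1, diff -q compares the two files (it prints nothing when they match, which is why only the "Copying:" progress lines appear), and clear_nvme then zero-fills the first 1 MiB from /dev/zero before the next combination. Every invocation receives its bdev configuration as JSON on a pipe (the "subsystems"/"bdev" block repeated above), which attaches the controller at 0000:00:10.0 as Nvme0 and waits for bdev examination. A sketch of one pass, with SPDK_DD, DUMP0, DUMP1 and CONF as placeholders for the paths shown in the log:

    # one write/read/verify/clear pass of basic_rw (sketch; values from the bs=4096 qd=1 pass above)
    bs=4096 qd=1 count=15
    "$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json "$CONF"                    # write test data
    "$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --bs="$bs" --qd="$qd" --count="$count" --json "$CONF"   # read it back
    diff -q "$DUMP0" "$DUMP1"                                                                     # silent when identical
    "$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json "$CONF"                  # clear_nvme before the next pass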
00:07:37.862 [2024-12-13 09:09:31.541504] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62821 ] 00:07:37.862 [2024-12-13 09:09:31.705582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.121 [2024-12-13 09:09:31.796020] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.121 [2024-12-13 09:09:31.964877] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.380  [2024-12-13T09:09:33.206Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:39.316 00:07:39.316 09:09:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:39.316 09:09:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:39.316 09:09:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:39.316 09:09:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:39.316 09:09:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:39.316 09:09:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:39.316 09:09:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:39.884 09:09:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:39.884 09:09:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:39.884 09:09:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:39.884 09:09:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:39.884 { 00:07:39.884 "subsystems": [ 00:07:39.884 { 00:07:39.884 "subsystem": "bdev", 00:07:39.884 "config": [ 00:07:39.884 { 00:07:39.884 "params": { 00:07:39.884 "trtype": "pcie", 00:07:39.884 "traddr": "0000:00:10.0", 00:07:39.884 "name": "Nvme0" 00:07:39.884 }, 00:07:39.884 "method": "bdev_nvme_attach_controller" 00:07:39.884 }, 00:07:39.884 { 00:07:39.884 "method": "bdev_wait_for_examine" 00:07:39.884 } 00:07:39.884 ] 00:07:39.884 } 00:07:39.884 ] 00:07:39.884 } 00:07:39.884 [2024-12-13 09:09:33.566056] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:39.884 [2024-12-13 09:09:33.566211] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62852 ] 00:07:39.884 [2024-12-13 09:09:33.729499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.143 [2024-12-13 09:09:33.818369] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.143 [2024-12-13 09:09:33.980919] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.402  [2024-12-13T09:09:34.860Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:40.970 00:07:40.970 09:09:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:40.970 09:09:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:40.970 09:09:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:40.970 09:09:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.229 { 00:07:41.229 "subsystems": [ 00:07:41.229 { 00:07:41.229 "subsystem": "bdev", 00:07:41.229 "config": [ 00:07:41.229 { 00:07:41.229 "params": { 00:07:41.229 "trtype": "pcie", 00:07:41.229 "traddr": "0000:00:10.0", 00:07:41.229 "name": "Nvme0" 00:07:41.229 }, 00:07:41.229 "method": "bdev_nvme_attach_controller" 00:07:41.229 }, 00:07:41.229 { 00:07:41.229 "method": "bdev_wait_for_examine" 00:07:41.229 } 00:07:41.229 ] 00:07:41.229 } 00:07:41.229 ] 00:07:41.229 } 00:07:41.229 [2024-12-13 09:09:34.950885] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:41.229 [2024-12-13 09:09:34.951084] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62877 ] 00:07:41.489 [2024-12-13 09:09:35.118591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.489 [2024-12-13 09:09:35.201161] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.489 [2024-12-13 09:09:35.346925] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.748  [2024-12-13T09:09:36.575Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:42.685 00:07:42.685 09:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:42.685 09:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:42.685 09:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:42.685 09:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:42.685 09:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:42.685 09:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:42.685 09:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:42.685 09:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:42.685 09:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:42.685 09:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:42.685 09:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:42.685 { 00:07:42.685 "subsystems": [ 00:07:42.685 { 00:07:42.685 "subsystem": "bdev", 00:07:42.685 "config": [ 00:07:42.685 { 00:07:42.685 "params": { 00:07:42.685 "trtype": "pcie", 00:07:42.685 "traddr": "0000:00:10.0", 00:07:42.685 "name": "Nvme0" 00:07:42.685 }, 00:07:42.685 "method": "bdev_nvme_attach_controller" 00:07:42.685 }, 00:07:42.685 { 00:07:42.685 "method": "bdev_wait_for_examine" 00:07:42.685 } 00:07:42.685 ] 00:07:42.685 } 00:07:42.685 ] 00:07:42.685 } 00:07:42.685 [2024-12-13 09:09:36.488417] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:42.685 [2024-12-13 09:09:36.488616] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62905 ] 00:07:42.944 [2024-12-13 09:09:36.665521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.944 [2024-12-13 09:09:36.748736] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.202 [2024-12-13 09:09:36.901240] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.202  [2024-12-13T09:09:38.030Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:44.140 00:07:44.140 09:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:44.140 09:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:44.140 09:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:44.140 09:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:44.140 09:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:44.140 09:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:44.140 09:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:44.140 09:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:44.399 09:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:44.399 09:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:44.399 09:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:44.399 09:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:44.399 { 00:07:44.399 "subsystems": [ 00:07:44.399 { 00:07:44.399 "subsystem": "bdev", 00:07:44.399 "config": [ 00:07:44.399 { 00:07:44.399 "params": { 00:07:44.399 "trtype": "pcie", 00:07:44.399 "traddr": "0000:00:10.0", 00:07:44.399 "name": "Nvme0" 00:07:44.399 }, 00:07:44.399 "method": "bdev_nvme_attach_controller" 00:07:44.399 }, 00:07:44.399 { 00:07:44.399 "method": "bdev_wait_for_examine" 00:07:44.399 } 00:07:44.399 ] 00:07:44.399 } 00:07:44.399 ] 00:07:44.399 } 00:07:44.658 [2024-12-13 09:09:38.326940] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:44.658 [2024-12-13 09:09:38.327109] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62936 ] 00:07:44.658 [2024-12-13 09:09:38.505175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.917 [2024-12-13 09:09:38.588088] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.917 [2024-12-13 09:09:38.732431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.176  [2024-12-13T09:09:40.003Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:46.113 00:07:46.113 09:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:46.113 09:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:46.113 09:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:46.113 09:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:46.113 { 00:07:46.113 "subsystems": [ 00:07:46.113 { 00:07:46.113 "subsystem": "bdev", 00:07:46.113 "config": [ 00:07:46.113 { 00:07:46.113 "params": { 00:07:46.113 "trtype": "pcie", 00:07:46.113 "traddr": "0000:00:10.0", 00:07:46.113 "name": "Nvme0" 00:07:46.113 }, 00:07:46.113 "method": "bdev_nvme_attach_controller" 00:07:46.113 }, 00:07:46.113 { 00:07:46.113 "method": "bdev_wait_for_examine" 00:07:46.113 } 00:07:46.113 ] 00:07:46.113 } 00:07:46.113 ] 00:07:46.113 } 00:07:46.113 [2024-12-13 09:09:39.862061] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:46.113 [2024-12-13 09:09:39.862230] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62956 ] 00:07:46.371 [2024-12-13 09:09:40.038278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.371 [2024-12-13 09:09:40.126600] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.630 [2024-12-13 09:09:40.273266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.630  [2024-12-13T09:09:41.455Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:47.565 00:07:47.565 09:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:47.565 09:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:47.565 09:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:47.565 09:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:47.565 09:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:47.565 09:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:47.565 09:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:47.565 09:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:47.565 09:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:47.565 09:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:47.565 09:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:47.565 { 00:07:47.565 "subsystems": [ 00:07:47.565 { 00:07:47.565 "subsystem": "bdev", 00:07:47.565 "config": [ 00:07:47.565 { 00:07:47.565 "params": { 00:07:47.565 "trtype": "pcie", 00:07:47.565 "traddr": "0000:00:10.0", 00:07:47.565 "name": "Nvme0" 00:07:47.565 }, 00:07:47.565 "method": "bdev_nvme_attach_controller" 00:07:47.565 }, 00:07:47.565 { 00:07:47.565 "method": "bdev_wait_for_examine" 00:07:47.565 } 00:07:47.565 ] 00:07:47.565 } 00:07:47.565 ] 00:07:47.565 } 00:07:47.565 [2024-12-13 09:09:41.237441] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:47.565 [2024-12-13 09:09:41.237633] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62984 ] 00:07:47.565 [2024-12-13 09:09:41.414832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.824 [2024-12-13 09:09:41.506493] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.824 [2024-12-13 09:09:41.652700] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.083  [2024-12-13T09:09:42.909Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:49.019 00:07:49.019 09:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:49.019 09:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:49.019 09:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:49.019 09:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:49.019 09:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:49.019 09:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:49.019 09:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:49.278 09:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:49.278 09:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:49.278 09:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:49.278 09:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:49.537 { 00:07:49.537 "subsystems": [ 00:07:49.537 { 00:07:49.537 "subsystem": "bdev", 00:07:49.537 "config": [ 00:07:49.537 { 00:07:49.537 "params": { 00:07:49.537 "trtype": "pcie", 00:07:49.537 "traddr": "0000:00:10.0", 00:07:49.537 "name": "Nvme0" 00:07:49.537 }, 00:07:49.537 "method": "bdev_nvme_attach_controller" 00:07:49.537 }, 00:07:49.537 { 00:07:49.537 "method": "bdev_wait_for_examine" 00:07:49.537 } 00:07:49.537 ] 00:07:49.537 } 00:07:49.537 ] 00:07:49.537 } 00:07:49.537 [2024-12-13 09:09:43.247336] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:49.537 [2024-12-13 09:09:43.247529] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63015 ] 00:07:49.796 [2024-12-13 09:09:43.427013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.796 [2024-12-13 09:09:43.524372] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.796 [2024-12-13 09:09:43.682547] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.055  [2024-12-13T09:09:44.883Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:50.993 00:07:50.993 09:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:50.993 09:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:50.993 09:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:50.993 09:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:50.993 { 00:07:50.993 "subsystems": [ 00:07:50.993 { 00:07:50.993 "subsystem": "bdev", 00:07:50.993 "config": [ 00:07:50.993 { 00:07:50.993 "params": { 00:07:50.993 "trtype": "pcie", 00:07:50.993 "traddr": "0000:00:10.0", 00:07:50.993 "name": "Nvme0" 00:07:50.993 }, 00:07:50.993 "method": "bdev_nvme_attach_controller" 00:07:50.993 }, 00:07:50.993 { 00:07:50.993 "method": "bdev_wait_for_examine" 00:07:50.993 } 00:07:50.993 ] 00:07:50.993 } 00:07:50.993 ] 00:07:50.993 } 00:07:50.993 [2024-12-13 09:09:44.649672] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:50.993 [2024-12-13 09:09:44.649860] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63040 ] 00:07:50.993 [2024-12-13 09:09:44.818358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.252 [2024-12-13 09:09:44.908312] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.252 [2024-12-13 09:09:45.061756] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.510  [2024-12-13T09:09:46.337Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:52.447 00:07:52.447 09:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.447 09:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:52.447 09:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:52.447 09:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:52.447 09:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:52.447 09:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:52.447 09:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:52.447 09:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:52.447 09:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:52.447 09:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:52.447 09:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:52.447 { 00:07:52.447 "subsystems": [ 00:07:52.447 { 00:07:52.447 "subsystem": "bdev", 00:07:52.447 "config": [ 00:07:52.447 { 00:07:52.447 "params": { 00:07:52.447 "trtype": "pcie", 00:07:52.447 "traddr": "0000:00:10.0", 00:07:52.447 "name": "Nvme0" 00:07:52.447 }, 00:07:52.447 "method": "bdev_nvme_attach_controller" 00:07:52.447 }, 00:07:52.447 { 00:07:52.447 "method": "bdev_wait_for_examine" 00:07:52.447 } 00:07:52.447 ] 00:07:52.447 } 00:07:52.447 ] 00:07:52.447 } 00:07:52.447 [2024-12-13 09:09:46.196472] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:52.447 [2024-12-13 09:09:46.196658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63062 ] 00:07:52.706 [2024-12-13 09:09:46.372809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.706 [2024-12-13 09:09:46.455806] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.965 [2024-12-13 09:09:46.605988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.965  [2024-12-13T09:09:47.792Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:53.902 00:07:53.902 09:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:53.902 09:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:53.902 09:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:53.902 09:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:53.902 09:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:53.902 09:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:53.902 09:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:53.902 09:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:54.161 09:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:54.161 09:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:54.161 09:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:54.161 09:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:54.161 { 00:07:54.161 "subsystems": [ 00:07:54.161 { 00:07:54.161 "subsystem": "bdev", 00:07:54.161 "config": [ 00:07:54.161 { 00:07:54.161 "params": { 00:07:54.161 "trtype": "pcie", 00:07:54.161 "traddr": "0000:00:10.0", 00:07:54.161 "name": "Nvme0" 00:07:54.161 }, 00:07:54.161 "method": "bdev_nvme_attach_controller" 00:07:54.161 }, 00:07:54.161 { 00:07:54.161 "method": "bdev_wait_for_examine" 00:07:54.161 } 00:07:54.161 ] 00:07:54.161 } 00:07:54.161 ] 00:07:54.161 } 00:07:54.161 [2024-12-13 09:09:48.038104] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:54.161 [2024-12-13 09:09:48.038244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63093 ] 00:07:54.420 [2024-12-13 09:09:48.201255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.420 [2024-12-13 09:09:48.290777] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.680 [2024-12-13 09:09:48.467484] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.939  [2024-12-13T09:09:49.791Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:55.901 00:07:55.901 09:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:55.901 09:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:55.901 09:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:55.901 09:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:55.901 { 00:07:55.901 "subsystems": [ 00:07:55.901 { 00:07:55.901 "subsystem": "bdev", 00:07:55.901 "config": [ 00:07:55.901 { 00:07:55.901 "params": { 00:07:55.901 "trtype": "pcie", 00:07:55.901 "traddr": "0000:00:10.0", 00:07:55.901 "name": "Nvme0" 00:07:55.901 }, 00:07:55.901 "method": "bdev_nvme_attach_controller" 00:07:55.901 }, 00:07:55.901 { 00:07:55.901 "method": "bdev_wait_for_examine" 00:07:55.901 } 00:07:55.901 ] 00:07:55.901 } 00:07:55.901 ] 00:07:55.901 } 00:07:55.901 [2024-12-13 09:09:49.638087] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:55.901 [2024-12-13 09:09:49.638263] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63119 ] 00:07:56.160 [2024-12-13 09:09:49.813949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.160 [2024-12-13 09:09:49.907609] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.419 [2024-12-13 09:09:50.074048] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.419  [2024-12-13T09:09:51.246Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:57.356 00:07:57.356 09:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:57.356 09:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:57.356 09:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:57.356 09:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:57.356 09:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:57.356 09:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:57.356 09:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:57.356 09:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:57.356 09:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:57.356 09:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:57.356 09:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:57.356 { 00:07:57.356 "subsystems": [ 00:07:57.356 { 00:07:57.356 "subsystem": "bdev", 00:07:57.356 "config": [ 00:07:57.356 { 00:07:57.356 "params": { 00:07:57.356 "trtype": "pcie", 00:07:57.356 "traddr": "0000:00:10.0", 00:07:57.356 "name": "Nvme0" 00:07:57.356 }, 00:07:57.356 "method": "bdev_nvme_attach_controller" 00:07:57.356 }, 00:07:57.356 { 00:07:57.356 "method": "bdev_wait_for_examine" 00:07:57.356 } 00:07:57.356 ] 00:07:57.356 } 00:07:57.356 ] 00:07:57.356 } 00:07:57.356 [2024-12-13 09:09:51.072697] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:57.356 [2024-12-13 09:09:51.072967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63146 ] 00:07:57.615 [2024-12-13 09:09:51.250934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.615 [2024-12-13 09:09:51.337434] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.615 [2024-12-13 09:09:51.498716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.874  [2024-12-13T09:09:52.701Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:58.811 00:07:58.811 09:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:58.811 09:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:58.811 09:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:58.811 09:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:58.811 09:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:58.811 09:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:58.811 09:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:59.379 09:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:59.379 09:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:59.379 09:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:59.379 09:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:59.379 { 00:07:59.379 "subsystems": [ 00:07:59.379 { 00:07:59.379 "subsystem": "bdev", 00:07:59.379 "config": [ 00:07:59.379 { 00:07:59.379 "params": { 00:07:59.379 "trtype": "pcie", 00:07:59.379 "traddr": "0000:00:10.0", 00:07:59.379 "name": "Nvme0" 00:07:59.379 }, 00:07:59.379 "method": "bdev_nvme_attach_controller" 00:07:59.379 }, 00:07:59.379 { 00:07:59.379 "method": "bdev_wait_for_examine" 00:07:59.379 } 00:07:59.379 ] 00:07:59.379 } 00:07:59.379 ] 00:07:59.379 } 00:07:59.379 [2024-12-13 09:09:53.085614] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:59.379 [2024-12-13 09:09:53.085777] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63177 ] 00:07:59.379 [2024-12-13 09:09:53.261860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.638 [2024-12-13 09:09:53.355744] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.897 [2024-12-13 09:09:53.536449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.897  [2024-12-13T09:09:54.725Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:00.835 00:08:00.835 09:09:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:00.835 09:09:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:00.835 09:09:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:00.835 09:09:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:00.835 { 00:08:00.835 "subsystems": [ 00:08:00.835 { 00:08:00.835 "subsystem": "bdev", 00:08:00.835 "config": [ 00:08:00.835 { 00:08:00.835 "params": { 00:08:00.835 "trtype": "pcie", 00:08:00.835 "traddr": "0000:00:10.0", 00:08:00.835 "name": "Nvme0" 00:08:00.835 }, 00:08:00.835 "method": "bdev_nvme_attach_controller" 00:08:00.835 }, 00:08:00.835 { 00:08:00.835 "method": "bdev_wait_for_examine" 00:08:00.835 } 00:08:00.835 ] 00:08:00.835 } 00:08:00.835 ] 00:08:00.835 } 00:08:00.835 [2024-12-13 09:09:54.540232] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:00.835 [2024-12-13 09:09:54.540451] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63203 ] 00:08:00.835 [2024-12-13 09:09:54.721300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.095 [2024-12-13 09:09:54.810022] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.095 [2024-12-13 09:09:54.963139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.353  [2024-12-13T09:09:56.179Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:02.289 00:08:02.289 09:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.289 09:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:02.289 09:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:02.289 09:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:02.289 09:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:02.289 09:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:02.289 09:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:02.289 09:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:02.289 09:09:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:02.289 09:09:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:02.289 09:09:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:02.289 { 00:08:02.289 "subsystems": [ 00:08:02.289 { 00:08:02.289 "subsystem": "bdev", 00:08:02.289 "config": [ 00:08:02.289 { 00:08:02.289 "params": { 00:08:02.289 "trtype": "pcie", 00:08:02.289 "traddr": "0000:00:10.0", 00:08:02.289 "name": "Nvme0" 00:08:02.289 }, 00:08:02.289 "method": "bdev_nvme_attach_controller" 00:08:02.289 }, 00:08:02.289 { 00:08:02.289 "method": "bdev_wait_for_examine" 00:08:02.289 } 00:08:02.289 ] 00:08:02.289 } 00:08:02.289 ] 00:08:02.289 } 00:08:02.289 [2024-12-13 09:09:56.108598] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
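Taken together, the passes above cover every combination built at the start of the test: block sizes of native_bs << 0/1/2, i.e. 4096, 8192 and 16384 bytes, each at queue depths 1 and 64, with the count scaled down (15, 7, 3) so each pass moves roughly 60, 56 and 48 kB, matching the "Copying:" totals in the log. The loop structure, sketched with the pass body abbreviated:

    # shape of the basic_rw sweep (values taken from the log above)
    native_bs=4096
    qds=(1 64)
    bss=()
    for i in 0 1 2; do bss+=( $((native_bs << i)) ); done   # 4096 8192 16384
    for bs in "${bss[@]}"; do
        for qd in "${qds[@]}"; do
            : # write / read back / diff / clear, as in the single pass sketched earlier
        done
    done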
00:08:02.289 [2024-12-13 09:09:56.109017] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63225 ] 00:08:02.547 [2024-12-13 09:09:56.290375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.547 [2024-12-13 09:09:56.387661] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.806 [2024-12-13 09:09:56.546876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.066  [2024-12-13T09:09:57.525Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:03.635 00:08:03.635 ************************************ 00:08:03.635 END TEST dd_rw 00:08:03.635 ************************************ 00:08:03.635 00:08:03.635 real 0m29.344s 00:08:03.635 user 0m24.647s 00:08:03.635 sys 0m13.848s 00:08:03.635 09:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.635 09:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:03.635 09:09:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:03.635 09:09:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:03.635 09:09:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.635 09:09:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:03.635 ************************************ 00:08:03.635 START TEST dd_rw_offset 00:08:03.635 ************************************ 00:08:03.635 09:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:08:03.635 09:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:03.635 09:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:03.635 09:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:08:03.635 09:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:03.635 09:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:03.635 09:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=1vpfq14tu5359sg7px7jb0a0kwurumvp4166ststekoexy2ng4ff2pu00nopg6nkd17k5vrkihdxwmp2agphewbhvgd94e6f5w5wt08auiestfo2wokzkyg5svquuaakpy5u9y1gwpwxfmyytqugdr8ozcsj6emkkpw6m8p980hgcyo67tkeci3kth92tdbfov3t8gnqi2k4ciewb5ajwcgi1b53smpjc55s6v9v7ern0iiasxex0kpjak4www8nszv8are5qisv287q2azw24f2yirchnpicoo6w50sfiaqrqsnwfuu7e9irs66ityyze70le59xjb09sy2440m8arvmium1k2dd2cx7ggp2id7fm4m49j38cfbum1sjj8scqbx1ydxfhbbu264b6mco11fgv3vgmxdl8vqkhdls3wuks2tg9g5oqfoaipfe7ybzudr7dvjuwi8hphv15eygx1lyty227l6nu8lckrehu9olar1bg9rlgn4f5kqrc89bphk23vmh50kpytcvz09rsa8ja8jw5x1i0pw1a8mg09gqt0utxueh65l5j3hzqhko9ksofqg47p3933dp71z7rxrjdbzm0lf4dlxbiwdof0zr4hbkugi5upx2xjhq3gtyvq50ao3qwy05qx3ct6ebi09ylrpd1ehdt29ncja0qizpz74seirpr680ewwolfuydu818a0nej33a2y8rtoaiw22422u5dh5aufxw1isbjmnw21obj10rupy0v6o1ks1ibqdffxjao4mwarzoyk47pgw07sv75z0a86f3gdmw8likxi81kybycfyjrsli3lsuig0vbxu37loytf88mke5mp1xjqvtj6ufww74cnkkifvckcl4khiq5hvvcslr75yitwjwhsjpyatx5zxniupoe5q9styhpomuvfjzconcvob3y2tswvs1ogd8aie0otupzswn07yiq1o05f4xv0qj477eops5mi19ibnwy7vb3de9w7j1bjk03j6ygbmb32b95y3n06fr6o4ufz3mibcnrwkmdz45s0r7cgj7dvud65izv3anvog5mgiz7xwucs8s541c32hgwygfmd85lap8c4dobj37ambg9a7rmjiht3krvxk6zvhgfucnj66vnmv4c5db5hulktafrux7xjl17u6t92cmr31lyk5szxw8xcib56fsyp97p7kl0v56m1nsmyh17pdsuh5y1tz7lidm6o04krd2nfl9q1mtzy468el6ci2o96zilfxs47iolzwg3pkxwndif32a3it0n1la5j9ndambmp6h6qi6q47fho6c65ok07yzphyksq03bgm15sye1nmf3om8wgk9v8r1rci8r0ct8abixadlgymecoqr270yat04eb8ioc1fskj4k4m1k2aofio6kddqslvb7soeh55lq9j1pr8gv9l2ayz0gcwvdvffyvu1ej0jmml4na68qtbz34i35mte9txyg1kph5ud3yjyv0fkvrnxodpkhz4t53p7fno7226v3hg63v3g0p9g4ufno6pspgvj4hehjiwtp27okgqxabkh9r5zi1bgnam6pj7rrjirbqhl8ski0ldravyjqpu5cdlw09f3o9iijtexx1s1tvrr0o685io00ic5iik9ndfk9gfv6kydebxkkps0aga5suv03uq9tskzue9r83qam9gfn96t6bhetxwjhxeq74hb5aehym4ig5oc0daavhwks6g848ewl53rafibylclr88cxw71k03d87aa4lr4udetetdczijfitid34h5o42wdcdczwljkenoxufzmv7swz1ywme0873nyyb2gkswd34h30qc7jz6gho5dzboapvmtu3dquz9s5a7sy43nedxc1zl1hnqf8e5c4rmhxntlru1y70icw6zwnoueyxwcrj3u10tatt4vbo43omxu6mfwb47xh3lo5jkc3svokk9o4o21qnbvolvqlnhp8d8d99rdkrml0bn3y2niypget8vt1brss1i9im5qxqqzufal2bdjny6g9x8qvs68qr3ngpaj48kshho0a164j6ikxrxv02qptd7hyylns5xz526pjqsjgxo766a42zsixcdn1vgpwba2ukghfplzwutf9wi96qtd43b0290f7aw4sc70fkiji1g3424ymwjc3ziu6lhfzjl5n1prrcfm2kzud1e5748yevz8n8klie7sneog88hwerbjcw7amk83mfqi3j3xkedrptq3g3i93am303bwnwj5o79cpldamankvg2y7c6p4esc6ldpwtal0yqwv21tp2ir0zkbuecrpclyi1k5174bhs2faq8153eqn78gac9bzsa57a7x08o50p2qja83hwd4ajjsar1jnm9m8wl1dskr1ovsjty4n7ju71pys9igbtdcsgaknk1dejrey5vmowjnyza7tedzsmguazu4l1ckltqup63j1j4u48f5altpw8jz792rdelw18jfozxlkl61gklxeryy4cxjvbgtd89sh27haic4jqt1i2zj1ofba1izqtgv5s5qvd443u5qz1lvavt7bxgvz044u2jd8fdf66jrk6z1f2vo3uvln9uen1qyzqafruoiulazmljbevhsfat0sxevbgg0ygd0xd0a1qahgpst3jag7tsjqicxcv2s6tp1rzzyh83t7paodlam956rg6g3j4g9m6y4vftczlklagfpk9bv5rld3iw0dxblghcuixfo9229nzxhb7ckjyxgg7gql185bye8pi3ws5voya6o1wz5gytulj0c3dlyafd5mbl1gxcjcwmz8zmbzt64mszdlkcd1n5qdv6u1gx73udhsw1nlrjlxm63ohovati8r1e0yyfs7daqfuuj5s7b98onij93vj0wj301lk2png1f1hux8av0pko3dgaovpcynx3taj0npa5s9axp253wu8mtoj8x6d6eywhbziqru3lkniuiqsfe302gs67qr0r57bidn1idotqxephtvoh6cf1eyfxb3ln1n3xz90n5anm22u4b9ilpd5l6yoe89bmmxwwcmuvpib5t706m9iwt1551k42ubrnhkks7k19hg5ktcwx4t5h7gnmcwlpg5f89wjd2kcd3gduzcwaghwm9ee16fzvt0ywonl7yksege1p0j7e84hndvca1hx2enwkj74pdfncz84w70dsnm9nl0ja96xcli8ko23cpgyirrol44r9g0ymhfxg0ezjami7x4exgl9xmwv2s9ng135y6k9nioyyyxghjy7ced34cvwc63jjbvrys9yu2vcy3kj8zvt66zy9qqiikdex9dh8iy4jbg5mcz0qw6gbycu7b0ldym4za522ipo9yq2vcaskkxee2hdg72vom7j0nox2g1td6fmxaipmusa7ip22fszk20d6itn78i7dbtx9n1wr3mkis3pihrvq60khb6xnsqxeil9h0r0ylifxm3worjmdhdh2mivebqx2yfpr8lwt17ihzny4czdat5bbg1hh62gb
qtuebqmc175q0zkdw63h48r1jk8p1fduhnx8da0cslw3obzpu5j9wp4p05jjzlht46zb3xrbwlf4veyb0oft5kjppbl91ckoc8u5uwitbrahyz5kef2l67v0850fl1qfa5ku0op9f6miqni8k386hzc8nbfk3o11jt2mzlq4wqer5bxja9i6a6nl3sdsnks366gopsw3z3lpan2k43154ltphjs1f0s6pe2xmx9v01phfvrc3oiozds4y6e0kleu7z0orykodpqxg372w9d2vlcbmnq2nz6pn4tjkkxr6uz9n9ve8l6wtn30tupxuq5zf8ik4qrtf8xarzuwal8otohbgfmsq5kg6gtuc2e4ath0ry8hbg5em0ytufek031dendhz3vx0qjy49xb5ssjb8krreq0ycqbdgpdfu0igfwkpgezghb7re11rdf4rq0slnjmle9894u3znnh6ezkhbv6nuimz6ggewxajeixmn2oqzx2y53r762vsaaqwd5cy4623bpyxtrw1dsoadtyuecduw4b60dtxt 00:08:03.635 09:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:08:03.635 09:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:08:03.635 09:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:03.635 09:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:03.895 { 00:08:03.895 "subsystems": [ 00:08:03.895 { 00:08:03.895 "subsystem": "bdev", 00:08:03.895 "config": [ 00:08:03.895 { 00:08:03.895 "params": { 00:08:03.895 "trtype": "pcie", 00:08:03.895 "traddr": "0000:00:10.0", 00:08:03.895 "name": "Nvme0" 00:08:03.895 }, 00:08:03.895 "method": "bdev_nvme_attach_controller" 00:08:03.895 }, 00:08:03.895 { 00:08:03.895 "method": "bdev_wait_for_examine" 00:08:03.895 } 00:08:03.895 ] 00:08:03.895 } 00:08:03.895 ] 00:08:03.895 } 00:08:03.895 [2024-12-13 09:09:57.607415] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:03.895 [2024-12-13 09:09:57.607536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63273 ] 00:08:03.895 [2024-12-13 09:09:57.772904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.153 [2024-12-13 09:09:57.861264] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.153 [2024-12-13 09:09:58.020937] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.412  [2024-12-13T09:09:59.275Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:05.385 00:08:05.385 09:09:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:08:05.385 09:09:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:08:05.385 09:09:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:05.385 09:09:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:05.385 { 00:08:05.385 "subsystems": [ 00:08:05.385 { 00:08:05.385 "subsystem": "bdev", 00:08:05.385 "config": [ 00:08:05.385 { 00:08:05.385 "params": { 00:08:05.385 "trtype": "pcie", 00:08:05.385 "traddr": "0000:00:10.0", 00:08:05.385 "name": "Nvme0" 00:08:05.385 }, 00:08:05.385 "method": "bdev_nvme_attach_controller" 00:08:05.385 }, 00:08:05.385 { 00:08:05.385 "method": "bdev_wait_for_examine" 00:08:05.385 } 00:08:05.385 ] 00:08:05.385 } 00:08:05.385 ] 00:08:05.385 } 00:08:05.385 [2024-12-13 09:09:59.191077] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
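A minimal sketch of how these runs hand the bdev configuration to spdk_dd over a pipe (the trace's "--json /dev/fd/62") rather than a config file; the JSON body mirrors the gen_conf output printed above, and the shortened variable name SPDK_DD is an assumption, not part of the captured log.
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  gen_conf() {
    # same bdev subsystem config as shown in the trace: attach the PCIe controller, then wait for examine
    cat <<'JSON'
  { "subsystems": [ { "subsystem": "bdev", "config": [
    { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
      "method": "bdev_nvme_attach_controller" },
    { "method": "bdev_wait_for_examine" } ] } ] }
JSON
  }
  # process substitution gives spdk_dd a /dev/fd/NN path, exactly as logged
  "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)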
00:08:05.385 [2024-12-13 09:09:59.191201] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63293 ] 00:08:05.643 [2024-12-13 09:09:59.358334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.643 [2024-12-13 09:09:59.453730] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.902 [2024-12-13 09:09:59.618843] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.161  [2024-12-13T09:10:00.619Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:06.729 00:08:06.729 09:10:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:06.730 09:10:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 1vpfq14tu5359sg7px7jb0a0kwurumvp4166ststekoexy2ng4ff2pu00nopg6nkd17k5vrkihdxwmp2agphewbhvgd94e6f5w5wt08auiestfo2wokzkyg5svquuaakpy5u9y1gwpwxfmyytqugdr8ozcsj6emkkpw6m8p980hgcyo67tkeci3kth92tdbfov3t8gnqi2k4ciewb5ajwcgi1b53smpjc55s6v9v7ern0iiasxex0kpjak4www8nszv8are5qisv287q2azw24f2yirchnpicoo6w50sfiaqrqsnwfuu7e9irs66ityyze70le59xjb09sy2440m8arvmium1k2dd2cx7ggp2id7fm4m49j38cfbum1sjj8scqbx1ydxfhbbu264b6mco11fgv3vgmxdl8vqkhdls3wuks2tg9g5oqfoaipfe7ybzudr7dvjuwi8hphv15eygx1lyty227l6nu8lckrehu9olar1bg9rlgn4f5kqrc89bphk23vmh50kpytcvz09rsa8ja8jw5x1i0pw1a8mg09gqt0utxueh65l5j3hzqhko9ksofqg47p3933dp71z7rxrjdbzm0lf4dlxbiwdof0zr4hbkugi5upx2xjhq3gtyvq50ao3qwy05qx3ct6ebi09ylrpd1ehdt29ncja0qizpz74seirpr680ewwolfuydu818a0nej33a2y8rtoaiw22422u5dh5aufxw1isbjmnw21obj10rupy0v6o1ks1ibqdffxjao4mwarzoyk47pgw07sv75z0a86f3gdmw8likxi81kybycfyjrsli3lsuig0vbxu37loytf88mke5mp1xjqvtj6ufww74cnkkifvckcl4khiq5hvvcslr75yitwjwhsjpyatx5zxniupoe5q9styhpomuvfjzconcvob3y2tswvs1ogd8aie0otupzswn07yiq1o05f4xv0qj477eops5mi19ibnwy7vb3de9w7j1bjk03j6ygbmb32b95y3n06fr6o4ufz3mibcnrwkmdz45s0r7cgj7dvud65izv3anvog5mgiz7xwucs8s541c32hgwygfmd85lap8c4dobj37ambg9a7rmjiht3krvxk6zvhgfucnj66vnmv4c5db5hulktafrux7xjl17u6t92cmr31lyk5szxw8xcib56fsyp97p7kl0v56m1nsmyh17pdsuh5y1tz7lidm6o04krd2nfl9q1mtzy468el6ci2o96zilfxs47iolzwg3pkxwndif32a3it0n1la5j9ndambmp6h6qi6q47fho6c65ok07yzphyksq03bgm15sye1nmf3om8wgk9v8r1rci8r0ct8abixadlgymecoqr270yat04eb8ioc1fskj4k4m1k2aofio6kddqslvb7soeh55lq9j1pr8gv9l2ayz0gcwvdvffyvu1ej0jmml4na68qtbz34i35mte9txyg1kph5ud3yjyv0fkvrnxodpkhz4t53p7fno7226v3hg63v3g0p9g4ufno6pspgvj4hehjiwtp27okgqxabkh9r5zi1bgnam6pj7rrjirbqhl8ski0ldravyjqpu5cdlw09f3o9iijtexx1s1tvrr0o685io00ic5iik9ndfk9gfv6kydebxkkps0aga5suv03uq9tskzue9r83qam9gfn96t6bhetxwjhxeq74hb5aehym4ig5oc0daavhwks6g848ewl53rafibylclr88cxw71k03d87aa4lr4udetetdczijfitid34h5o42wdcdczwljkenoxufzmv7swz1ywme0873nyyb2gkswd34h30qc7jz6gho5dzboapvmtu3dquz9s5a7sy43nedxc1zl1hnqf8e5c4rmhxntlru1y70icw6zwnoueyxwcrj3u10tatt4vbo43omxu6mfwb47xh3lo5jkc3svokk9o4o21qnbvolvqlnhp8d8d99rdkrml0bn3y2niypget8vt1brss1i9im5qxqqzufal2bdjny6g9x8qvs68qr3ngpaj48kshho0a164j6ikxrxv02qptd7hyylns5xz526pjqsjgxo766a42zsixcdn1vgpwba2ukghfplzwutf9wi96qtd43b0290f7aw4sc70fkiji1g3424ymwjc3ziu6lhfzjl5n1prrcfm2kzud1e5748yevz8n8klie7sneog88hwerbjcw7amk83mfqi3j3xkedrptq3g3i93am303bwnwj5o79cpldamankvg2y7c6p4esc6ldpwtal0yqwv21tp2ir0zkbuecrpclyi1k5174bhs2faq8153eqn78gac9bzsa57a7x08o50p2qja83hwd4ajjsar1jnm9m8wl1dskr1ovsjty4n7ju71pys9igbtdcsgaknk1dejrey5vmowjnyza7tedzsmguazu4l1ckltqup63j1j4u48f5altpw8jz792rdelw18jfozxlkl61gklxeryy4cxjvbgtd89sh27haic4jqt1i2zj1ofba1izqtgv5s5qvd443u5qz1lvavt7bxgvz044u2jd8fdf66jrk
6z1f2vo3uvln9uen1qyzqafruoiulazmljbevhsfat0sxevbgg0ygd0xd0a1qahgpst3jag7tsjqicxcv2s6tp1rzzyh83t7paodlam956rg6g3j4g9m6y4vftczlklagfpk9bv5rld3iw0dxblghcuixfo9229nzxhb7ckjyxgg7gql185bye8pi3ws5voya6o1wz5gytulj0c3dlyafd5mbl1gxcjcwmz8zmbzt64mszdlkcd1n5qdv6u1gx73udhsw1nlrjlxm63ohovati8r1e0yyfs7daqfuuj5s7b98onij93vj0wj301lk2png1f1hux8av0pko3dgaovpcynx3taj0npa5s9axp253wu8mtoj8x6d6eywhbziqru3lkniuiqsfe302gs67qr0r57bidn1idotqxephtvoh6cf1eyfxb3ln1n3xz90n5anm22u4b9ilpd5l6yoe89bmmxwwcmuvpib5t706m9iwt1551k42ubrnhkks7k19hg5ktcwx4t5h7gnmcwlpg5f89wjd2kcd3gduzcwaghwm9ee16fzvt0ywonl7yksege1p0j7e84hndvca1hx2enwkj74pdfncz84w70dsnm9nl0ja96xcli8ko23cpgyirrol44r9g0ymhfxg0ezjami7x4exgl9xmwv2s9ng135y6k9nioyyyxghjy7ced34cvwc63jjbvrys9yu2vcy3kj8zvt66zy9qqiikdex9dh8iy4jbg5mcz0qw6gbycu7b0ldym4za522ipo9yq2vcaskkxee2hdg72vom7j0nox2g1td6fmxaipmusa7ip22fszk20d6itn78i7dbtx9n1wr3mkis3pihrvq60khb6xnsqxeil9h0r0ylifxm3worjmdhdh2mivebqx2yfpr8lwt17ihzny4czdat5bbg1hh62gbqtuebqmc175q0zkdw63h48r1jk8p1fduhnx8da0cslw3obzpu5j9wp4p05jjzlht46zb3xrbwlf4veyb0oft5kjppbl91ckoc8u5uwitbrahyz5kef2l67v0850fl1qfa5ku0op9f6miqni8k386hzc8nbfk3o11jt2mzlq4wqer5bxja9i6a6nl3sdsnks366gopsw3z3lpan2k43154ltphjs1f0s6pe2xmx9v01phfvrc3oiozds4y6e0kleu7z0orykodpqxg372w9d2vlcbmnq2nz6pn4tjkkxr6uz9n9ve8l6wtn30tupxuq5zf8ik4qrtf8xarzuwal8otohbgfmsq5kg6gtuc2e4ath0ry8hbg5em0ytufek031dendhz3vx0qjy49xb5ssjb8krreq0ycqbdgpdfu0igfwkpgezghb7re11rdf4rq0slnjmle9894u3znnh6ezkhbv6nuimz6ggewxajeixmn2oqzx2y53r762vsaaqwd5cy4623bpyxtrw1dsoadtyuecduw4b60dtxt == \1\v\p\f\q\1\4\t\u\5\3\5\9\s\g\7\p\x\7\j\b\0\a\0\k\w\u\r\u\m\v\p\4\1\6\6\s\t\s\t\e\k\o\e\x\y\2\n\g\4\f\f\2\p\u\0\0\n\o\p\g\6\n\k\d\1\7\k\5\v\r\k\i\h\d\x\w\m\p\2\a\g\p\h\e\w\b\h\v\g\d\9\4\e\6\f\5\w\5\w\t\0\8\a\u\i\e\s\t\f\o\2\w\o\k\z\k\y\g\5\s\v\q\u\u\a\a\k\p\y\5\u\9\y\1\g\w\p\w\x\f\m\y\y\t\q\u\g\d\r\8\o\z\c\s\j\6\e\m\k\k\p\w\6\m\8\p\9\8\0\h\g\c\y\o\6\7\t\k\e\c\i\3\k\t\h\9\2\t\d\b\f\o\v\3\t\8\g\n\q\i\2\k\4\c\i\e\w\b\5\a\j\w\c\g\i\1\b\5\3\s\m\p\j\c\5\5\s\6\v\9\v\7\e\r\n\0\i\i\a\s\x\e\x\0\k\p\j\a\k\4\w\w\w\8\n\s\z\v\8\a\r\e\5\q\i\s\v\2\8\7\q\2\a\z\w\2\4\f\2\y\i\r\c\h\n\p\i\c\o\o\6\w\5\0\s\f\i\a\q\r\q\s\n\w\f\u\u\7\e\9\i\r\s\6\6\i\t\y\y\z\e\7\0\l\e\5\9\x\j\b\0\9\s\y\2\4\4\0\m\8\a\r\v\m\i\u\m\1\k\2\d\d\2\c\x\7\g\g\p\2\i\d\7\f\m\4\m\4\9\j\3\8\c\f\b\u\m\1\s\j\j\8\s\c\q\b\x\1\y\d\x\f\h\b\b\u\2\6\4\b\6\m\c\o\1\1\f\g\v\3\v\g\m\x\d\l\8\v\q\k\h\d\l\s\3\w\u\k\s\2\t\g\9\g\5\o\q\f\o\a\i\p\f\e\7\y\b\z\u\d\r\7\d\v\j\u\w\i\8\h\p\h\v\1\5\e\y\g\x\1\l\y\t\y\2\2\7\l\6\n\u\8\l\c\k\r\e\h\u\9\o\l\a\r\1\b\g\9\r\l\g\n\4\f\5\k\q\r\c\8\9\b\p\h\k\2\3\v\m\h\5\0\k\p\y\t\c\v\z\0\9\r\s\a\8\j\a\8\j\w\5\x\1\i\0\p\w\1\a\8\m\g\0\9\g\q\t\0\u\t\x\u\e\h\6\5\l\5\j\3\h\z\q\h\k\o\9\k\s\o\f\q\g\4\7\p\3\9\3\3\d\p\7\1\z\7\r\x\r\j\d\b\z\m\0\l\f\4\d\l\x\b\i\w\d\o\f\0\z\r\4\h\b\k\u\g\i\5\u\p\x\2\x\j\h\q\3\g\t\y\v\q\5\0\a\o\3\q\w\y\0\5\q\x\3\c\t\6\e\b\i\0\9\y\l\r\p\d\1\e\h\d\t\2\9\n\c\j\a\0\q\i\z\p\z\7\4\s\e\i\r\p\r\6\8\0\e\w\w\o\l\f\u\y\d\u\8\1\8\a\0\n\e\j\3\3\a\2\y\8\r\t\o\a\i\w\2\2\4\2\2\u\5\d\h\5\a\u\f\x\w\1\i\s\b\j\m\n\w\2\1\o\b\j\1\0\r\u\p\y\0\v\6\o\1\k\s\1\i\b\q\d\f\f\x\j\a\o\4\m\w\a\r\z\o\y\k\4\7\p\g\w\0\7\s\v\7\5\z\0\a\8\6\f\3\g\d\m\w\8\l\i\k\x\i\8\1\k\y\b\y\c\f\y\j\r\s\l\i\3\l\s\u\i\g\0\v\b\x\u\3\7\l\o\y\t\f\8\8\m\k\e\5\m\p\1\x\j\q\v\t\j\6\u\f\w\w\7\4\c\n\k\k\i\f\v\c\k\c\l\4\k\h\i\q\5\h\v\v\c\s\l\r\7\5\y\i\t\w\j\w\h\s\j\p\y\a\t\x\5\z\x\n\i\u\p\o\e\5\q\9\s\t\y\h\p\o\m\u\v\f\j\z\c\o\n\c\v\o\b\3\y\2\t\s\w\v\s\1\o\g\d\8\a\i\e\0\o\t\u\p\z\s\w\n\0\7\y\i\q\1\o\0\5\f\4\x\v\0\q\j\4\7\7\e\o\p\s\5\m\i\1\9\i\b\n\w\y\7\v\b\3\d\e\9\w\7\j\1\b\j\k\0\3\j\6\y\g\b\m\b\3\2\b\9\5\y\3\n\0\
6\f\r\6\o\4\u\f\z\3\m\i\b\c\n\r\w\k\m\d\z\4\5\s\0\r\7\c\g\j\7\d\v\u\d\6\5\i\z\v\3\a\n\v\o\g\5\m\g\i\z\7\x\w\u\c\s\8\s\5\4\1\c\3\2\h\g\w\y\g\f\m\d\8\5\l\a\p\8\c\4\d\o\b\j\3\7\a\m\b\g\9\a\7\r\m\j\i\h\t\3\k\r\v\x\k\6\z\v\h\g\f\u\c\n\j\6\6\v\n\m\v\4\c\5\d\b\5\h\u\l\k\t\a\f\r\u\x\7\x\j\l\1\7\u\6\t\9\2\c\m\r\3\1\l\y\k\5\s\z\x\w\8\x\c\i\b\5\6\f\s\y\p\9\7\p\7\k\l\0\v\5\6\m\1\n\s\m\y\h\1\7\p\d\s\u\h\5\y\1\t\z\7\l\i\d\m\6\o\0\4\k\r\d\2\n\f\l\9\q\1\m\t\z\y\4\6\8\e\l\6\c\i\2\o\9\6\z\i\l\f\x\s\4\7\i\o\l\z\w\g\3\p\k\x\w\n\d\i\f\3\2\a\3\i\t\0\n\1\l\a\5\j\9\n\d\a\m\b\m\p\6\h\6\q\i\6\q\4\7\f\h\o\6\c\6\5\o\k\0\7\y\z\p\h\y\k\s\q\0\3\b\g\m\1\5\s\y\e\1\n\m\f\3\o\m\8\w\g\k\9\v\8\r\1\r\c\i\8\r\0\c\t\8\a\b\i\x\a\d\l\g\y\m\e\c\o\q\r\2\7\0\y\a\t\0\4\e\b\8\i\o\c\1\f\s\k\j\4\k\4\m\1\k\2\a\o\f\i\o\6\k\d\d\q\s\l\v\b\7\s\o\e\h\5\5\l\q\9\j\1\p\r\8\g\v\9\l\2\a\y\z\0\g\c\w\v\d\v\f\f\y\v\u\1\e\j\0\j\m\m\l\4\n\a\6\8\q\t\b\z\3\4\i\3\5\m\t\e\9\t\x\y\g\1\k\p\h\5\u\d\3\y\j\y\v\0\f\k\v\r\n\x\o\d\p\k\h\z\4\t\5\3\p\7\f\n\o\7\2\2\6\v\3\h\g\6\3\v\3\g\0\p\9\g\4\u\f\n\o\6\p\s\p\g\v\j\4\h\e\h\j\i\w\t\p\2\7\o\k\g\q\x\a\b\k\h\9\r\5\z\i\1\b\g\n\a\m\6\p\j\7\r\r\j\i\r\b\q\h\l\8\s\k\i\0\l\d\r\a\v\y\j\q\p\u\5\c\d\l\w\0\9\f\3\o\9\i\i\j\t\e\x\x\1\s\1\t\v\r\r\0\o\6\8\5\i\o\0\0\i\c\5\i\i\k\9\n\d\f\k\9\g\f\v\6\k\y\d\e\b\x\k\k\p\s\0\a\g\a\5\s\u\v\0\3\u\q\9\t\s\k\z\u\e\9\r\8\3\q\a\m\9\g\f\n\9\6\t\6\b\h\e\t\x\w\j\h\x\e\q\7\4\h\b\5\a\e\h\y\m\4\i\g\5\o\c\0\d\a\a\v\h\w\k\s\6\g\8\4\8\e\w\l\5\3\r\a\f\i\b\y\l\c\l\r\8\8\c\x\w\7\1\k\0\3\d\8\7\a\a\4\l\r\4\u\d\e\t\e\t\d\c\z\i\j\f\i\t\i\d\3\4\h\5\o\4\2\w\d\c\d\c\z\w\l\j\k\e\n\o\x\u\f\z\m\v\7\s\w\z\1\y\w\m\e\0\8\7\3\n\y\y\b\2\g\k\s\w\d\3\4\h\3\0\q\c\7\j\z\6\g\h\o\5\d\z\b\o\a\p\v\m\t\u\3\d\q\u\z\9\s\5\a\7\s\y\4\3\n\e\d\x\c\1\z\l\1\h\n\q\f\8\e\5\c\4\r\m\h\x\n\t\l\r\u\1\y\7\0\i\c\w\6\z\w\n\o\u\e\y\x\w\c\r\j\3\u\1\0\t\a\t\t\4\v\b\o\4\3\o\m\x\u\6\m\f\w\b\4\7\x\h\3\l\o\5\j\k\c\3\s\v\o\k\k\9\o\4\o\2\1\q\n\b\v\o\l\v\q\l\n\h\p\8\d\8\d\9\9\r\d\k\r\m\l\0\b\n\3\y\2\n\i\y\p\g\e\t\8\v\t\1\b\r\s\s\1\i\9\i\m\5\q\x\q\q\z\u\f\a\l\2\b\d\j\n\y\6\g\9\x\8\q\v\s\6\8\q\r\3\n\g\p\a\j\4\8\k\s\h\h\o\0\a\1\6\4\j\6\i\k\x\r\x\v\0\2\q\p\t\d\7\h\y\y\l\n\s\5\x\z\5\2\6\p\j\q\s\j\g\x\o\7\6\6\a\4\2\z\s\i\x\c\d\n\1\v\g\p\w\b\a\2\u\k\g\h\f\p\l\z\w\u\t\f\9\w\i\9\6\q\t\d\4\3\b\0\2\9\0\f\7\a\w\4\s\c\7\0\f\k\i\j\i\1\g\3\4\2\4\y\m\w\j\c\3\z\i\u\6\l\h\f\z\j\l\5\n\1\p\r\r\c\f\m\2\k\z\u\d\1\e\5\7\4\8\y\e\v\z\8\n\8\k\l\i\e\7\s\n\e\o\g\8\8\h\w\e\r\b\j\c\w\7\a\m\k\8\3\m\f\q\i\3\j\3\x\k\e\d\r\p\t\q\3\g\3\i\9\3\a\m\3\0\3\b\w\n\w\j\5\o\7\9\c\p\l\d\a\m\a\n\k\v\g\2\y\7\c\6\p\4\e\s\c\6\l\d\p\w\t\a\l\0\y\q\w\v\2\1\t\p\2\i\r\0\z\k\b\u\e\c\r\p\c\l\y\i\1\k\5\1\7\4\b\h\s\2\f\a\q\8\1\5\3\e\q\n\7\8\g\a\c\9\b\z\s\a\5\7\a\7\x\0\8\o\5\0\p\2\q\j\a\8\3\h\w\d\4\a\j\j\s\a\r\1\j\n\m\9\m\8\w\l\1\d\s\k\r\1\o\v\s\j\t\y\4\n\7\j\u\7\1\p\y\s\9\i\g\b\t\d\c\s\g\a\k\n\k\1\d\e\j\r\e\y\5\v\m\o\w\j\n\y\z\a\7\t\e\d\z\s\m\g\u\a\z\u\4\l\1\c\k\l\t\q\u\p\6\3\j\1\j\4\u\4\8\f\5\a\l\t\p\w\8\j\z\7\9\2\r\d\e\l\w\1\8\j\f\o\z\x\l\k\l\6\1\g\k\l\x\e\r\y\y\4\c\x\j\v\b\g\t\d\8\9\s\h\2\7\h\a\i\c\4\j\q\t\1\i\2\z\j\1\o\f\b\a\1\i\z\q\t\g\v\5\s\5\q\v\d\4\4\3\u\5\q\z\1\l\v\a\v\t\7\b\x\g\v\z\0\4\4\u\2\j\d\8\f\d\f\6\6\j\r\k\6\z\1\f\2\v\o\3\u\v\l\n\9\u\e\n\1\q\y\z\q\a\f\r\u\o\i\u\l\a\z\m\l\j\b\e\v\h\s\f\a\t\0\s\x\e\v\b\g\g\0\y\g\d\0\x\d\0\a\1\q\a\h\g\p\s\t\3\j\a\g\7\t\s\j\q\i\c\x\c\v\2\s\6\t\p\1\r\z\z\y\h\8\3\t\7\p\a\o\d\l\a\m\9\5\6\r\g\6\g\3\j\4\g\9\m\6\y\4\v\f\t\c\z\l\k\l\a\g\f\p\k\9\b\v\5\r\l\d\3\i\w\0\d\x\b\l\g\h\c\u\i\x\f\o\9\2\2\9\n\z\x\h\b\7\c\k\j\y\x\g\g\7\g\q\l\1\8\5\b\y\e\8\p\i\3\w\s\5\v\o\y\a\6\o\1\w\z\5\g\y
\t\u\l\j\0\c\3\d\l\y\a\f\d\5\m\b\l\1\g\x\c\j\c\w\m\z\8\z\m\b\z\t\6\4\m\s\z\d\l\k\c\d\1\n\5\q\d\v\6\u\1\g\x\7\3\u\d\h\s\w\1\n\l\r\j\l\x\m\6\3\o\h\o\v\a\t\i\8\r\1\e\0\y\y\f\s\7\d\a\q\f\u\u\j\5\s\7\b\9\8\o\n\i\j\9\3\v\j\0\w\j\3\0\1\l\k\2\p\n\g\1\f\1\h\u\x\8\a\v\0\p\k\o\3\d\g\a\o\v\p\c\y\n\x\3\t\a\j\0\n\p\a\5\s\9\a\x\p\2\5\3\w\u\8\m\t\o\j\8\x\6\d\6\e\y\w\h\b\z\i\q\r\u\3\l\k\n\i\u\i\q\s\f\e\3\0\2\g\s\6\7\q\r\0\r\5\7\b\i\d\n\1\i\d\o\t\q\x\e\p\h\t\v\o\h\6\c\f\1\e\y\f\x\b\3\l\n\1\n\3\x\z\9\0\n\5\a\n\m\2\2\u\4\b\9\i\l\p\d\5\l\6\y\o\e\8\9\b\m\m\x\w\w\c\m\u\v\p\i\b\5\t\7\0\6\m\9\i\w\t\1\5\5\1\k\4\2\u\b\r\n\h\k\k\s\7\k\1\9\h\g\5\k\t\c\w\x\4\t\5\h\7\g\n\m\c\w\l\p\g\5\f\8\9\w\j\d\2\k\c\d\3\g\d\u\z\c\w\a\g\h\w\m\9\e\e\1\6\f\z\v\t\0\y\w\o\n\l\7\y\k\s\e\g\e\1\p\0\j\7\e\8\4\h\n\d\v\c\a\1\h\x\2\e\n\w\k\j\7\4\p\d\f\n\c\z\8\4\w\7\0\d\s\n\m\9\n\l\0\j\a\9\6\x\c\l\i\8\k\o\2\3\c\p\g\y\i\r\r\o\l\4\4\r\9\g\0\y\m\h\f\x\g\0\e\z\j\a\m\i\7\x\4\e\x\g\l\9\x\m\w\v\2\s\9\n\g\1\3\5\y\6\k\9\n\i\o\y\y\y\x\g\h\j\y\7\c\e\d\3\4\c\v\w\c\6\3\j\j\b\v\r\y\s\9\y\u\2\v\c\y\3\k\j\8\z\v\t\6\6\z\y\9\q\q\i\i\k\d\e\x\9\d\h\8\i\y\4\j\b\g\5\m\c\z\0\q\w\6\g\b\y\c\u\7\b\0\l\d\y\m\4\z\a\5\2\2\i\p\o\9\y\q\2\v\c\a\s\k\k\x\e\e\2\h\d\g\7\2\v\o\m\7\j\0\n\o\x\2\g\1\t\d\6\f\m\x\a\i\p\m\u\s\a\7\i\p\2\2\f\s\z\k\2\0\d\6\i\t\n\7\8\i\7\d\b\t\x\9\n\1\w\r\3\m\k\i\s\3\p\i\h\r\v\q\6\0\k\h\b\6\x\n\s\q\x\e\i\l\9\h\0\r\0\y\l\i\f\x\m\3\w\o\r\j\m\d\h\d\h\2\m\i\v\e\b\q\x\2\y\f\p\r\8\l\w\t\1\7\i\h\z\n\y\4\c\z\d\a\t\5\b\b\g\1\h\h\6\2\g\b\q\t\u\e\b\q\m\c\1\7\5\q\0\z\k\d\w\6\3\h\4\8\r\1\j\k\8\p\1\f\d\u\h\n\x\8\d\a\0\c\s\l\w\3\o\b\z\p\u\5\j\9\w\p\4\p\0\5\j\j\z\l\h\t\4\6\z\b\3\x\r\b\w\l\f\4\v\e\y\b\0\o\f\t\5\k\j\p\p\b\l\9\1\c\k\o\c\8\u\5\u\w\i\t\b\r\a\h\y\z\5\k\e\f\2\l\6\7\v\0\8\5\0\f\l\1\q\f\a\5\k\u\0\o\p\9\f\6\m\i\q\n\i\8\k\3\8\6\h\z\c\8\n\b\f\k\3\o\1\1\j\t\2\m\z\l\q\4\w\q\e\r\5\b\x\j\a\9\i\6\a\6\n\l\3\s\d\s\n\k\s\3\6\6\g\o\p\s\w\3\z\3\l\p\a\n\2\k\4\3\1\5\4\l\t\p\h\j\s\1\f\0\s\6\p\e\2\x\m\x\9\v\0\1\p\h\f\v\r\c\3\o\i\o\z\d\s\4\y\6\e\0\k\l\e\u\7\z\0\o\r\y\k\o\d\p\q\x\g\3\7\2\w\9\d\2\v\l\c\b\m\n\q\2\n\z\6\p\n\4\t\j\k\k\x\r\6\u\z\9\n\9\v\e\8\l\6\w\t\n\3\0\t\u\p\x\u\q\5\z\f\8\i\k\4\q\r\t\f\8\x\a\r\z\u\w\a\l\8\o\t\o\h\b\g\f\m\s\q\5\k\g\6\g\t\u\c\2\e\4\a\t\h\0\r\y\8\h\b\g\5\e\m\0\y\t\u\f\e\k\0\3\1\d\e\n\d\h\z\3\v\x\0\q\j\y\4\9\x\b\5\s\s\j\b\8\k\r\r\e\q\0\y\c\q\b\d\g\p\d\f\u\0\i\g\f\w\k\p\g\e\z\g\h\b\7\r\e\1\1\r\d\f\4\r\q\0\s\l\n\j\m\l\e\9\8\9\4\u\3\z\n\n\h\6\e\z\k\h\b\v\6\n\u\i\m\z\6\g\g\e\w\x\a\j\e\i\x\m\n\2\o\q\z\x\2\y\5\3\r\7\6\2\v\s\a\a\q\w\d\5\c\y\4\6\2\3\b\p\y\x\t\r\w\1\d\s\o\a\d\t\y\u\e\c\d\u\w\4\b\6\0\d\t\x\t ]] 00:08:06.730 00:08:06.730 real 0m3.116s 00:08:06.730 user 0m2.643s 00:08:06.730 sys 0m1.610s 00:08:06.730 09:10:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.730 09:10:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:06.730 ************************************ 00:08:06.730 END TEST dd_rw_offset 00:08:06.730 ************************************ 00:08:06.989 09:10:00 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:08:06.989 09:10:00 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:06.989 09:10:00 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:06.989 09:10:00 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:06.989 09:10:00 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:08:06.989 09:10:00 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
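A minimal sketch of the seek/skip round-trip that dd_rw_offset records above, reusing SPDK_DD and gen_conf from the earlier sketch; the file names are shortened and the data generator is a stand-in for the harness's gen_bytes, not the exact test code.
  data=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 4096)   # 4096 bytes of alnum test data
  printf %s "$data" > dd.dump0
  "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json <(gen_conf)             # write one block past the start
  "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json <(gen_conf)   # read that same block back
  read -rn4096 data_check < dd.dump1
  [[ $data == "$data_check" ]] && echo "offset round-trip OK"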
00:08:06.989 09:10:00 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:08:06.989 09:10:00 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:06.989 09:10:00 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:08:06.989 09:10:00 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:06.989 09:10:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:06.989 { 00:08:06.989 "subsystems": [ 00:08:06.989 { 00:08:06.989 "subsystem": "bdev", 00:08:06.989 "config": [ 00:08:06.989 { 00:08:06.989 "params": { 00:08:06.989 "trtype": "pcie", 00:08:06.989 "traddr": "0000:00:10.0", 00:08:06.989 "name": "Nvme0" 00:08:06.989 }, 00:08:06.989 "method": "bdev_nvme_attach_controller" 00:08:06.989 }, 00:08:06.989 { 00:08:06.989 "method": "bdev_wait_for_examine" 00:08:06.989 } 00:08:06.989 ] 00:08:06.989 } 00:08:06.989 ] 00:08:06.989 } 00:08:06.989 [2024-12-13 09:10:00.733917] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:06.989 [2024-12-13 09:10:00.734342] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63340 ] 00:08:07.248 [2024-12-13 09:10:00.914851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.248 [2024-12-13 09:10:01.003669] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.507 [2024-12-13 09:10:01.151374] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.507  [2024-12-13T09:10:02.332Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:08.442 00:08:08.442 09:10:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:08.442 ************************************ 00:08:08.442 END TEST spdk_dd_basic_rw 00:08:08.442 ************************************ 00:08:08.442 00:08:08.442 real 0m36.199s 00:08:08.442 user 0m30.125s 00:08:08.442 sys 0m16.867s 00:08:08.442 09:10:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.442 09:10:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:08.442 09:10:02 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:08.442 09:10:02 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:08.442 09:10:02 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.442 09:10:02 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:08.442 ************************************ 00:08:08.442 START TEST spdk_dd_posix 00:08:08.442 ************************************ 00:08:08.443 09:10:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:08.701 * Looking for test storage... 
00:08:08.701 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:08.701 09:10:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:08.701 09:10:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lcov --version 00:08:08.701 09:10:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:08.701 09:10:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:08.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.702 --rc genhtml_branch_coverage=1 00:08:08.702 --rc genhtml_function_coverage=1 00:08:08.702 --rc genhtml_legend=1 00:08:08.702 --rc geninfo_all_blocks=1 00:08:08.702 --rc geninfo_unexecuted_blocks=1 00:08:08.702 00:08:08.702 ' 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:08.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.702 --rc genhtml_branch_coverage=1 00:08:08.702 --rc genhtml_function_coverage=1 00:08:08.702 --rc genhtml_legend=1 00:08:08.702 --rc geninfo_all_blocks=1 00:08:08.702 --rc geninfo_unexecuted_blocks=1 00:08:08.702 00:08:08.702 ' 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:08.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.702 --rc genhtml_branch_coverage=1 00:08:08.702 --rc genhtml_function_coverage=1 00:08:08.702 --rc genhtml_legend=1 00:08:08.702 --rc geninfo_all_blocks=1 00:08:08.702 --rc geninfo_unexecuted_blocks=1 00:08:08.702 00:08:08.702 ' 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:08.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.702 --rc genhtml_branch_coverage=1 00:08:08.702 --rc genhtml_function_coverage=1 00:08:08.702 --rc genhtml_legend=1 00:08:08.702 --rc geninfo_all_blocks=1 00:08:08.702 --rc geninfo_unexecuted_blocks=1 00:08:08.702 00:08:08.702 ' 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:08.702 * First test run, liburing in use 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:08.702 ************************************ 00:08:08.702 START TEST dd_flag_append 00:08:08.702 ************************************ 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=g1touwscjz6rnh2ywkivkyfon3j12p97 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=hbd8wuweuigk0qidn5c40rnn861h1mi0 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s g1touwscjz6rnh2ywkivkyfon3j12p97 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s hbd8wuweuigk0qidn5c40rnn861h1mi0 00:08:08.702 09:10:02 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:08.961 [2024-12-13 09:10:02.622431] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
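A minimal sketch of the append check this run is driving at, using the two 32-byte strings visible in the trace; paths are shortened and SPDK_DD is assumed from the earlier sketch. With --oflag=append the destination's original bytes must survive, immediately followed by the appended input.
  dump0=g1touwscjz6rnh2ywkivkyfon3j12p97
  dump1=hbd8wuweuigk0qidn5c40rnn861h1mi0
  printf %s "$dump0" > dd.dump0
  printf %s "$dump1" > dd.dump1
  "$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --oflag=append
  [[ $(<dd.dump1) == "${dump1}${dump0}" ]] && echo "append preserved existing bytes"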
00:08:08.961 [2024-12-13 09:10:02.622865] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63424 ] 00:08:08.961 [2024-12-13 09:10:02.800207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.220 [2024-12-13 09:10:02.890693] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.220 [2024-12-13 09:10:03.034774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.478  [2024-12-13T09:10:04.305Z] Copying: 32/32 [B] (average 31 kBps) 00:08:10.415 00:08:10.415 ************************************ 00:08:10.415 END TEST dd_flag_append 00:08:10.415 ************************************ 00:08:10.415 09:10:03 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ hbd8wuweuigk0qidn5c40rnn861h1mi0g1touwscjz6rnh2ywkivkyfon3j12p97 == \h\b\d\8\w\u\w\e\u\i\g\k\0\q\i\d\n\5\c\4\0\r\n\n\8\6\1\h\1\m\i\0\g\1\t\o\u\w\s\c\j\z\6\r\n\h\2\y\w\k\i\v\k\y\f\o\n\3\j\1\2\p\9\7 ]] 00:08:10.415 00:08:10.415 real 0m1.483s 00:08:10.415 user 0m1.180s 00:08:10.415 sys 0m0.821s 00:08:10.415 09:10:03 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.415 09:10:03 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:10.415 09:10:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:10.415 09:10:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:10.415 09:10:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.415 09:10:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:10.415 ************************************ 00:08:10.415 START TEST dd_flag_directory 00:08:10.415 ************************************ 00:08:10.415 09:10:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:08:10.415 09:10:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:10.415 09:10:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:08:10.415 09:10:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:10.415 09:10:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.415 09:10:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.415 09:10:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.415 09:10:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.415 09:10:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.415 09:10:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.415 09:10:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.415 09:10:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:10.415 09:10:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:10.415 [2024-12-13 09:10:04.119006] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:10.415 [2024-12-13 09:10:04.119124] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63459 ] 00:08:10.415 [2024-12-13 09:10:04.279690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.675 [2024-12-13 09:10:04.361127] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.675 [2024-12-13 09:10:04.537172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.936 [2024-12-13 09:10:04.646543] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:10.936 [2024-12-13 09:10:04.646879] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:10.936 [2024-12-13 09:10:04.646926] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:11.503 [2024-12-13 09:10:05.256698] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:11.762 09:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:08:11.762 09:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:11.762 09:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:08:11.762 09:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:08:11.762 09:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:08:11.762 09:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:11.762 09:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:11.762 09:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:08:11.762 09:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:11.762 09:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.762 09:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:11.762 09:10:05 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.763 09:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:11.763 09:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.763 09:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:11.763 09:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.763 09:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:11.763 09:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:11.763 [2024-12-13 09:10:05.605408] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:11.763 [2024-12-13 09:10:05.605580] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63475 ] 00:08:12.022 [2024-12-13 09:10:05.786420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.022 [2024-12-13 09:10:05.874518] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.281 [2024-12-13 09:10:06.039031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.281 [2024-12-13 09:10:06.126450] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:12.281 [2024-12-13 09:10:06.126530] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:12.281 [2024-12-13 09:10:06.126552] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:12.848 [2024-12-13 09:10:06.725370] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:13.108 ************************************ 00:08:13.108 END TEST dd_flag_directory 00:08:13.108 ************************************ 00:08:13.108 09:10:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:08:13.108 09:10:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:13.108 09:10:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:08:13.108 09:10:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:08:13.108 09:10:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:08:13.108 09:10:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:13.108 00:08:13.108 real 0m2.905s 00:08:13.108 user 0m2.317s 00:08:13.108 sys 0m0.369s 00:08:13.108 09:10:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.108 09:10:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:08:13.108 09:10:06 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:13.108 09:10:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:13.108 09:10:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.108 09:10:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:13.108 ************************************ 00:08:13.108 START TEST dd_flag_nofollow 00:08:13.108 ************************************ 00:08:13.108 09:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:08:13.108 09:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:13.108 09:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:13.108 09:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:13.367 09:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:13.367 09:10:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.367 09:10:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:08:13.367 09:10:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.367 09:10:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.367 09:10:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.367 09:10:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.367 09:10:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.367 09:10:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.367 09:10:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.367 09:10:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.367 09:10:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:13.367 09:10:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.367 [2024-12-13 09:10:07.114043] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
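A minimal sketch of the directory-flag rejection exercised above: passing a regular file with --iflag=directory must fail with "Not a directory", and the harness's NOT wrapper inverts the exit status so that expected failure is what makes the test pass (file names shortened, SPDK_DD assumed as before).
  if "$SPDK_DD" --if=dd.dump0 --iflag=directory --of=dd.dump0 2> dd.err; then
      echo "unexpected success"
  else
      grep -q 'Not a directory' dd.err && echo "directory flag rejected as expected"
  fi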
00:08:13.367 [2024-12-13 09:10:07.114517] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63521 ] 00:08:13.626 [2024-12-13 09:10:07.296749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.626 [2024-12-13 09:10:07.381171] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.885 [2024-12-13 09:10:07.527731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.885 [2024-12-13 09:10:07.620124] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:13.885 [2024-12-13 09:10:07.620196] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:13.885 [2024-12-13 09:10:07.620235] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:14.453 [2024-12-13 09:10:08.270344] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:14.712 09:10:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:08:14.712 09:10:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:14.712 09:10:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:08:14.712 09:10:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:08:14.712 09:10:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:08:14.712 09:10:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:14.712 09:10:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:14.712 09:10:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:08:14.712 09:10:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:14.712 09:10:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.712 09:10:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:14.712 09:10:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.712 09:10:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:14.712 09:10:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.712 09:10:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:14.712 09:10:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:14.712 09:10:08 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:14.712 09:10:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:14.974 [2024-12-13 09:10:08.614526] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:14.974 [2024-12-13 09:10:08.614745] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63537 ] 00:08:14.974 [2024-12-13 09:10:08.793008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.237 [2024-12-13 09:10:08.895259] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.237 [2024-12-13 09:10:09.059048] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.497 [2024-12-13 09:10:09.155623] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:15.497 [2024-12-13 09:10:09.155708] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:15.497 [2024-12-13 09:10:09.155734] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:16.065 [2024-12-13 09:10:09.797552] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:16.324 09:10:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:08:16.324 09:10:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:16.324 09:10:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:08:16.324 09:10:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:08:16.324 09:10:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:08:16.324 09:10:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:16.324 09:10:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:08:16.324 09:10:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:08:16.324 09:10:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:16.324 09:10:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:16.324 [2024-12-13 09:10:10.158662] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
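A minimal sketch of the nofollow pair of checks around this point: the symlinked input must be refused when --iflag=nofollow is set, while the follow-up copy without the flag (the 512-byte transfer logged next) goes through the link. Names are shortened and the final comparison is simplified to cmp rather than the harness's string check.
  ln -fs dd.dump0 dd.dump0.link
  if "$SPDK_DD" --if=dd.dump0.link --iflag=nofollow --of=dd.dump1 2> dd.err; then
      echo "unexpected success"
  else
      grep -q 'Too many levels of symbolic links' dd.err
  fi
  "$SPDK_DD" --if=dd.dump0.link --of=dd.dump1   # without nofollow the link is followed
  cmp -s dd.dump0 dd.dump1 && echo "nofollow semantics OK"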
00:08:16.324 [2024-12-13 09:10:10.158845] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63562 ] 00:08:16.583 [2024-12-13 09:10:10.338094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.584 [2024-12-13 09:10:10.425337] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.843 [2024-12-13 09:10:10.579120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.843  [2024-12-13T09:10:11.671Z] Copying: 512/512 [B] (average 500 kBps) 00:08:17.781 00:08:17.781 09:10:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ peh1zeijs17nzfq5quvf91y9vc7kymdfkvca4ies2ejj2gyy78og4y7ljqmvzn8sligmfnd1x2wzcd4vdbno8z0zy60avbcz98uzucnpkbuonoc2woyywp2hz6auvcvnbm86n5qksptd8u9rz122yjc6yn9f8nyt7mufvgqqu0jyl7kziv8cshjcc4bbwi5saslj76l3s5lzht8lmn2vj22mbk3hixnv75ehi4p1lx9hzmjkdeeoad5jhhuwpllf8ynz68kucan08sbnrgeduzmbqufzcn9qbtamogsi6m1zcwmbc62j4wwg0wjiuvjh0s7ywp3lhhay8ah8p1xg6xjkuxnqvtuu326wzxtzsokuuw5yklgzfazhl2aksnpw2qfkmdtff614dyk6v9n0k1ybhy8xldbuupxbvnfj0acq7mkndhvitn1wpm4qszdt5aq9skk05i3lu1qavm4c7kr5ad8scevaemf60ku94nfzzm1vw2m5cjp83ksa7n0e == \p\e\h\1\z\e\i\j\s\1\7\n\z\f\q\5\q\u\v\f\9\1\y\9\v\c\7\k\y\m\d\f\k\v\c\a\4\i\e\s\2\e\j\j\2\g\y\y\7\8\o\g\4\y\7\l\j\q\m\v\z\n\8\s\l\i\g\m\f\n\d\1\x\2\w\z\c\d\4\v\d\b\n\o\8\z\0\z\y\6\0\a\v\b\c\z\9\8\u\z\u\c\n\p\k\b\u\o\n\o\c\2\w\o\y\y\w\p\2\h\z\6\a\u\v\c\v\n\b\m\8\6\n\5\q\k\s\p\t\d\8\u\9\r\z\1\2\2\y\j\c\6\y\n\9\f\8\n\y\t\7\m\u\f\v\g\q\q\u\0\j\y\l\7\k\z\i\v\8\c\s\h\j\c\c\4\b\b\w\i\5\s\a\s\l\j\7\6\l\3\s\5\l\z\h\t\8\l\m\n\2\v\j\2\2\m\b\k\3\h\i\x\n\v\7\5\e\h\i\4\p\1\l\x\9\h\z\m\j\k\d\e\e\o\a\d\5\j\h\h\u\w\p\l\l\f\8\y\n\z\6\8\k\u\c\a\n\0\8\s\b\n\r\g\e\d\u\z\m\b\q\u\f\z\c\n\9\q\b\t\a\m\o\g\s\i\6\m\1\z\c\w\m\b\c\6\2\j\4\w\w\g\0\w\j\i\u\v\j\h\0\s\7\y\w\p\3\l\h\h\a\y\8\a\h\8\p\1\x\g\6\x\j\k\u\x\n\q\v\t\u\u\3\2\6\w\z\x\t\z\s\o\k\u\u\w\5\y\k\l\g\z\f\a\z\h\l\2\a\k\s\n\p\w\2\q\f\k\m\d\t\f\f\6\1\4\d\y\k\6\v\9\n\0\k\1\y\b\h\y\8\x\l\d\b\u\u\p\x\b\v\n\f\j\0\a\c\q\7\m\k\n\d\h\v\i\t\n\1\w\p\m\4\q\s\z\d\t\5\a\q\9\s\k\k\0\5\i\3\l\u\1\q\a\v\m\4\c\7\k\r\5\a\d\8\s\c\e\v\a\e\m\f\6\0\k\u\9\4\n\f\z\z\m\1\v\w\2\m\5\c\j\p\8\3\k\s\a\7\n\0\e ]] 00:08:17.781 00:08:17.781 real 0m4.561s 00:08:17.781 user 0m3.664s 00:08:17.781 sys 0m1.208s 00:08:17.781 09:10:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.781 ************************************ 00:08:17.781 END TEST dd_flag_nofollow 00:08:17.781 ************************************ 00:08:17.781 09:10:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:17.781 09:10:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:17.781 09:10:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:17.781 09:10:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.781 09:10:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:17.781 ************************************ 00:08:17.781 START TEST dd_flag_noatime 00:08:17.781 ************************************ 00:08:17.781 09:10:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:08:17.781 09:10:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:08:17.781 09:10:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:08:17.781 09:10:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:08:17.781 09:10:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:08:17.781 09:10:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:17.781 09:10:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:17.781 09:10:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1734081010 00:08:17.781 09:10:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:17.781 09:10:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1734081011 00:08:17.781 09:10:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:08:19.160 09:10:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:19.160 [2024-12-13 09:10:12.720207] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:19.160 [2024-12-13 09:10:12.720382] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63611 ] 00:08:19.160 [2024-12-13 09:10:12.882670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.160 [2024-12-13 09:10:12.969527] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.419 [2024-12-13 09:10:13.125636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.419  [2024-12-13T09:10:14.248Z] Copying: 512/512 [B] (average 500 kBps) 00:08:20.358 00:08:20.358 09:10:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:20.358 09:10:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1734081010 )) 00:08:20.358 09:10:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:20.358 09:10:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1734081011 )) 00:08:20.358 09:10:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:20.358 [2024-12-13 09:10:14.189171] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:20.358 [2024-12-13 09:10:14.189343] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63641 ] 00:08:20.617 [2024-12-13 09:10:14.353307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.617 [2024-12-13 09:10:14.450432] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.876 [2024-12-13 09:10:14.625933] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.876  [2024-12-13T09:10:15.706Z] Copying: 512/512 [B] (average 500 kBps) 00:08:21.816 00:08:21.816 09:10:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:21.816 09:10:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1734081014 )) 00:08:21.816 00:08:21.816 real 0m4.016s 00:08:21.816 user 0m2.421s 00:08:21.816 sys 0m1.692s 00:08:21.816 09:10:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.816 ************************************ 00:08:21.816 END TEST dd_flag_noatime 00:08:21.816 ************************************ 00:08:21.816 09:10:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:21.816 09:10:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:21.816 09:10:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:21.816 09:10:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.816 09:10:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:21.816 ************************************ 00:08:21.816 START TEST dd_flags_misc 00:08:21.817 ************************************ 00:08:21.817 09:10:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:08:21.817 09:10:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:21.817 09:10:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:21.817 09:10:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:21.817 09:10:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:21.817 09:10:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:21.817 09:10:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:21.817 09:10:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:21.817 09:10:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:21.817 09:10:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:22.075 [2024-12-13 09:10:15.795878] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:22.075 [2024-12-13 09:10:15.796035] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63677 ] 00:08:22.334 [2024-12-13 09:10:15.973334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.334 [2024-12-13 09:10:16.056392] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.334 [2024-12-13 09:10:16.222048] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.592  [2024-12-13T09:10:17.418Z] Copying: 512/512 [B] (average 500 kBps) 00:08:23.528 00:08:23.528 09:10:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ knpnddq0ksc2u1ljvall6tq2bu1lcqg7dizrr2duipwr29cwwqdx1w2os384m2sbvv7n80d4lrwpumqyfgh3i890q4tv5o0oji6015sqif1fcvd767dc7dq5vn7zk26b176h6q4q4lhrm4o0sxhw5slu07for9bp7plt8tacxszin0schxm832eb6v6nknmsoa8mi4769pykgm8z8ya9d43ljfx9izd4zprbqeac9hrzbxawfjezbp31oix9vjt5h6b75jdxih12fvyptr3gqnn1eyl0js8w7lb9ystzcoxmyt5thw2k6ozq51mivxnhwr04r1af3j4tz2g68lazghm71h6qwzvsmpuev5j4wl0kraekhj5o7soeeq85fuy0gngsddyqodk8ovlou5xxjm4ey2ir0nk38q60pz076fzsu2sr80zwlzmfh53wpiajci6xvfuse6nkqklmx986cr66t88bcvqgn6b1x8gsfs0llw95h3n3mdsfad81snen == \k\n\p\n\d\d\q\0\k\s\c\2\u\1\l\j\v\a\l\l\6\t\q\2\b\u\1\l\c\q\g\7\d\i\z\r\r\2\d\u\i\p\w\r\2\9\c\w\w\q\d\x\1\w\2\o\s\3\8\4\m\2\s\b\v\v\7\n\8\0\d\4\l\r\w\p\u\m\q\y\f\g\h\3\i\8\9\0\q\4\t\v\5\o\0\o\j\i\6\0\1\5\s\q\i\f\1\f\c\v\d\7\6\7\d\c\7\d\q\5\v\n\7\z\k\2\6\b\1\7\6\h\6\q\4\q\4\l\h\r\m\4\o\0\s\x\h\w\5\s\l\u\0\7\f\o\r\9\b\p\7\p\l\t\8\t\a\c\x\s\z\i\n\0\s\c\h\x\m\8\3\2\e\b\6\v\6\n\k\n\m\s\o\a\8\m\i\4\7\6\9\p\y\k\g\m\8\z\8\y\a\9\d\4\3\l\j\f\x\9\i\z\d\4\z\p\r\b\q\e\a\c\9\h\r\z\b\x\a\w\f\j\e\z\b\p\3\1\o\i\x\9\v\j\t\5\h\6\b\7\5\j\d\x\i\h\1\2\f\v\y\p\t\r\3\g\q\n\n\1\e\y\l\0\j\s\8\w\7\l\b\9\y\s\t\z\c\o\x\m\y\t\5\t\h\w\2\k\6\o\z\q\5\1\m\i\v\x\n\h\w\r\0\4\r\1\a\f\3\j\4\t\z\2\g\6\8\l\a\z\g\h\m\7\1\h\6\q\w\z\v\s\m\p\u\e\v\5\j\4\w\l\0\k\r\a\e\k\h\j\5\o\7\s\o\e\e\q\8\5\f\u\y\0\g\n\g\s\d\d\y\q\o\d\k\8\o\v\l\o\u\5\x\x\j\m\4\e\y\2\i\r\0\n\k\3\8\q\6\0\p\z\0\7\6\f\z\s\u\2\s\r\8\0\z\w\l\z\m\f\h\5\3\w\p\i\a\j\c\i\6\x\v\f\u\s\e\6\n\k\q\k\l\m\x\9\8\6\c\r\6\6\t\8\8\b\c\v\q\g\n\6\b\1\x\8\g\s\f\s\0\l\l\w\9\5\h\3\n\3\m\d\s\f\a\d\8\1\s\n\e\n ]] 00:08:23.528 09:10:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:23.528 09:10:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:23.528 [2024-12-13 09:10:17.261843] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:23.528 [2024-12-13 09:10:17.262018] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63699 ] 00:08:23.787 [2024-12-13 09:10:17.441614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.787 [2024-12-13 09:10:17.522522] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.787 [2024-12-13 09:10:17.667755] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:24.046  [2024-12-13T09:10:18.874Z] Copying: 512/512 [B] (average 500 kBps) 00:08:24.984 00:08:24.984 09:10:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ knpnddq0ksc2u1ljvall6tq2bu1lcqg7dizrr2duipwr29cwwqdx1w2os384m2sbvv7n80d4lrwpumqyfgh3i890q4tv5o0oji6015sqif1fcvd767dc7dq5vn7zk26b176h6q4q4lhrm4o0sxhw5slu07for9bp7plt8tacxszin0schxm832eb6v6nknmsoa8mi4769pykgm8z8ya9d43ljfx9izd4zprbqeac9hrzbxawfjezbp31oix9vjt5h6b75jdxih12fvyptr3gqnn1eyl0js8w7lb9ystzcoxmyt5thw2k6ozq51mivxnhwr04r1af3j4tz2g68lazghm71h6qwzvsmpuev5j4wl0kraekhj5o7soeeq85fuy0gngsddyqodk8ovlou5xxjm4ey2ir0nk38q60pz076fzsu2sr80zwlzmfh53wpiajci6xvfuse6nkqklmx986cr66t88bcvqgn6b1x8gsfs0llw95h3n3mdsfad81snen == \k\n\p\n\d\d\q\0\k\s\c\2\u\1\l\j\v\a\l\l\6\t\q\2\b\u\1\l\c\q\g\7\d\i\z\r\r\2\d\u\i\p\w\r\2\9\c\w\w\q\d\x\1\w\2\o\s\3\8\4\m\2\s\b\v\v\7\n\8\0\d\4\l\r\w\p\u\m\q\y\f\g\h\3\i\8\9\0\q\4\t\v\5\o\0\o\j\i\6\0\1\5\s\q\i\f\1\f\c\v\d\7\6\7\d\c\7\d\q\5\v\n\7\z\k\2\6\b\1\7\6\h\6\q\4\q\4\l\h\r\m\4\o\0\s\x\h\w\5\s\l\u\0\7\f\o\r\9\b\p\7\p\l\t\8\t\a\c\x\s\z\i\n\0\s\c\h\x\m\8\3\2\e\b\6\v\6\n\k\n\m\s\o\a\8\m\i\4\7\6\9\p\y\k\g\m\8\z\8\y\a\9\d\4\3\l\j\f\x\9\i\z\d\4\z\p\r\b\q\e\a\c\9\h\r\z\b\x\a\w\f\j\e\z\b\p\3\1\o\i\x\9\v\j\t\5\h\6\b\7\5\j\d\x\i\h\1\2\f\v\y\p\t\r\3\g\q\n\n\1\e\y\l\0\j\s\8\w\7\l\b\9\y\s\t\z\c\o\x\m\y\t\5\t\h\w\2\k\6\o\z\q\5\1\m\i\v\x\n\h\w\r\0\4\r\1\a\f\3\j\4\t\z\2\g\6\8\l\a\z\g\h\m\7\1\h\6\q\w\z\v\s\m\p\u\e\v\5\j\4\w\l\0\k\r\a\e\k\h\j\5\o\7\s\o\e\e\q\8\5\f\u\y\0\g\n\g\s\d\d\y\q\o\d\k\8\o\v\l\o\u\5\x\x\j\m\4\e\y\2\i\r\0\n\k\3\8\q\6\0\p\z\0\7\6\f\z\s\u\2\s\r\8\0\z\w\l\z\m\f\h\5\3\w\p\i\a\j\c\i\6\x\v\f\u\s\e\6\n\k\q\k\l\m\x\9\8\6\c\r\6\6\t\8\8\b\c\v\q\g\n\6\b\1\x\8\g\s\f\s\0\l\l\w\9\5\h\3\n\3\m\d\s\f\a\d\8\1\s\n\e\n ]] 00:08:24.984 09:10:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:24.984 09:10:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:24.984 [2024-12-13 09:10:18.745244] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:24.984 [2024-12-13 09:10:18.745435] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63720 ] 00:08:25.244 [2024-12-13 09:10:18.923487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.244 [2024-12-13 09:10:19.009107] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.503 [2024-12-13 09:10:19.155931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.503  [2024-12-13T09:10:20.389Z] Copying: 512/512 [B] (average 125 kBps) 00:08:26.499 00:08:26.499 09:10:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ knpnddq0ksc2u1ljvall6tq2bu1lcqg7dizrr2duipwr29cwwqdx1w2os384m2sbvv7n80d4lrwpumqyfgh3i890q4tv5o0oji6015sqif1fcvd767dc7dq5vn7zk26b176h6q4q4lhrm4o0sxhw5slu07for9bp7plt8tacxszin0schxm832eb6v6nknmsoa8mi4769pykgm8z8ya9d43ljfx9izd4zprbqeac9hrzbxawfjezbp31oix9vjt5h6b75jdxih12fvyptr3gqnn1eyl0js8w7lb9ystzcoxmyt5thw2k6ozq51mivxnhwr04r1af3j4tz2g68lazghm71h6qwzvsmpuev5j4wl0kraekhj5o7soeeq85fuy0gngsddyqodk8ovlou5xxjm4ey2ir0nk38q60pz076fzsu2sr80zwlzmfh53wpiajci6xvfuse6nkqklmx986cr66t88bcvqgn6b1x8gsfs0llw95h3n3mdsfad81snen == \k\n\p\n\d\d\q\0\k\s\c\2\u\1\l\j\v\a\l\l\6\t\q\2\b\u\1\l\c\q\g\7\d\i\z\r\r\2\d\u\i\p\w\r\2\9\c\w\w\q\d\x\1\w\2\o\s\3\8\4\m\2\s\b\v\v\7\n\8\0\d\4\l\r\w\p\u\m\q\y\f\g\h\3\i\8\9\0\q\4\t\v\5\o\0\o\j\i\6\0\1\5\s\q\i\f\1\f\c\v\d\7\6\7\d\c\7\d\q\5\v\n\7\z\k\2\6\b\1\7\6\h\6\q\4\q\4\l\h\r\m\4\o\0\s\x\h\w\5\s\l\u\0\7\f\o\r\9\b\p\7\p\l\t\8\t\a\c\x\s\z\i\n\0\s\c\h\x\m\8\3\2\e\b\6\v\6\n\k\n\m\s\o\a\8\m\i\4\7\6\9\p\y\k\g\m\8\z\8\y\a\9\d\4\3\l\j\f\x\9\i\z\d\4\z\p\r\b\q\e\a\c\9\h\r\z\b\x\a\w\f\j\e\z\b\p\3\1\o\i\x\9\v\j\t\5\h\6\b\7\5\j\d\x\i\h\1\2\f\v\y\p\t\r\3\g\q\n\n\1\e\y\l\0\j\s\8\w\7\l\b\9\y\s\t\z\c\o\x\m\y\t\5\t\h\w\2\k\6\o\z\q\5\1\m\i\v\x\n\h\w\r\0\4\r\1\a\f\3\j\4\t\z\2\g\6\8\l\a\z\g\h\m\7\1\h\6\q\w\z\v\s\m\p\u\e\v\5\j\4\w\l\0\k\r\a\e\k\h\j\5\o\7\s\o\e\e\q\8\5\f\u\y\0\g\n\g\s\d\d\y\q\o\d\k\8\o\v\l\o\u\5\x\x\j\m\4\e\y\2\i\r\0\n\k\3\8\q\6\0\p\z\0\7\6\f\z\s\u\2\s\r\8\0\z\w\l\z\m\f\h\5\3\w\p\i\a\j\c\i\6\x\v\f\u\s\e\6\n\k\q\k\l\m\x\9\8\6\c\r\6\6\t\8\8\b\c\v\q\g\n\6\b\1\x\8\g\s\f\s\0\l\l\w\9\5\h\3\n\3\m\d\s\f\a\d\8\1\s\n\e\n ]] 00:08:26.499 09:10:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:26.499 09:10:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:26.499 [2024-12-13 09:10:20.339331] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:26.499 [2024-12-13 09:10:20.339501] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63742 ] 00:08:26.758 [2024-12-13 09:10:20.512627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.758 [2024-12-13 09:10:20.610380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.017 [2024-12-13 09:10:20.760288] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.017  [2024-12-13T09:10:21.845Z] Copying: 512/512 [B] (average 166 kBps) 00:08:27.955 00:08:27.955 09:10:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ knpnddq0ksc2u1ljvall6tq2bu1lcqg7dizrr2duipwr29cwwqdx1w2os384m2sbvv7n80d4lrwpumqyfgh3i890q4tv5o0oji6015sqif1fcvd767dc7dq5vn7zk26b176h6q4q4lhrm4o0sxhw5slu07for9bp7plt8tacxszin0schxm832eb6v6nknmsoa8mi4769pykgm8z8ya9d43ljfx9izd4zprbqeac9hrzbxawfjezbp31oix9vjt5h6b75jdxih12fvyptr3gqnn1eyl0js8w7lb9ystzcoxmyt5thw2k6ozq51mivxnhwr04r1af3j4tz2g68lazghm71h6qwzvsmpuev5j4wl0kraekhj5o7soeeq85fuy0gngsddyqodk8ovlou5xxjm4ey2ir0nk38q60pz076fzsu2sr80zwlzmfh53wpiajci6xvfuse6nkqklmx986cr66t88bcvqgn6b1x8gsfs0llw95h3n3mdsfad81snen == \k\n\p\n\d\d\q\0\k\s\c\2\u\1\l\j\v\a\l\l\6\t\q\2\b\u\1\l\c\q\g\7\d\i\z\r\r\2\d\u\i\p\w\r\2\9\c\w\w\q\d\x\1\w\2\o\s\3\8\4\m\2\s\b\v\v\7\n\8\0\d\4\l\r\w\p\u\m\q\y\f\g\h\3\i\8\9\0\q\4\t\v\5\o\0\o\j\i\6\0\1\5\s\q\i\f\1\f\c\v\d\7\6\7\d\c\7\d\q\5\v\n\7\z\k\2\6\b\1\7\6\h\6\q\4\q\4\l\h\r\m\4\o\0\s\x\h\w\5\s\l\u\0\7\f\o\r\9\b\p\7\p\l\t\8\t\a\c\x\s\z\i\n\0\s\c\h\x\m\8\3\2\e\b\6\v\6\n\k\n\m\s\o\a\8\m\i\4\7\6\9\p\y\k\g\m\8\z\8\y\a\9\d\4\3\l\j\f\x\9\i\z\d\4\z\p\r\b\q\e\a\c\9\h\r\z\b\x\a\w\f\j\e\z\b\p\3\1\o\i\x\9\v\j\t\5\h\6\b\7\5\j\d\x\i\h\1\2\f\v\y\p\t\r\3\g\q\n\n\1\e\y\l\0\j\s\8\w\7\l\b\9\y\s\t\z\c\o\x\m\y\t\5\t\h\w\2\k\6\o\z\q\5\1\m\i\v\x\n\h\w\r\0\4\r\1\a\f\3\j\4\t\z\2\g\6\8\l\a\z\g\h\m\7\1\h\6\q\w\z\v\s\m\p\u\e\v\5\j\4\w\l\0\k\r\a\e\k\h\j\5\o\7\s\o\e\e\q\8\5\f\u\y\0\g\n\g\s\d\d\y\q\o\d\k\8\o\v\l\o\u\5\x\x\j\m\4\e\y\2\i\r\0\n\k\3\8\q\6\0\p\z\0\7\6\f\z\s\u\2\s\r\8\0\z\w\l\z\m\f\h\5\3\w\p\i\a\j\c\i\6\x\v\f\u\s\e\6\n\k\q\k\l\m\x\9\8\6\c\r\6\6\t\8\8\b\c\v\q\g\n\6\b\1\x\8\g\s\f\s\0\l\l\w\9\5\h\3\n\3\m\d\s\f\a\d\8\1\s\n\e\n ]] 00:08:27.955 09:10:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:27.955 09:10:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:27.955 09:10:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:27.955 09:10:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:27.955 09:10:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:27.955 09:10:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:28.214 [2024-12-13 09:10:21.846622] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:28.214 [2024-12-13 09:10:21.846793] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63763 ] 00:08:28.214 [2024-12-13 09:10:22.024329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.473 [2024-12-13 09:10:22.118589] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.473 [2024-12-13 09:10:22.262860] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.473  [2024-12-13T09:10:23.301Z] Copying: 512/512 [B] (average 500 kBps) 00:08:29.411 00:08:29.411 09:10:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ty3yovqijuxb2jgsh2ypx2pg6j39szwttbyoe03ut7j2is4xu6mon2wx8hy2d8rphjp87r50tgbw8e90hyar5w5hb3xvgi4yq8mvrvxf65hylstql12kfqe5p806tma2qy900fu6n4oxib4623nueeic1dqhuqu7wh8z1q8yn5sb63eu49lsm4cj66efgbswdqwl31oxtjwfp16c9lydzrtpf35vl0rfv6ulrqdf4y3f6qv9dzzrwj6iqs109utnfznifr2gftbr6gdefyvknfyaxxbb64ppm4pt292qzr5sutpdqibf3el6auz8hzypzrpe1fqgd639sbqq9px2dlgsk0ca0lcmbml1p7xqiv3fkm8sid5o46bguc890p4svg54o21nvphgl4nyds7ylxt68131adkb2fmsys19n7z1vzdkt3lszhe9cnxhh2cfesuu65lf2t9qra8n42lpvhqrhx4qw74tw4mqtzepa8juq8cjhw5li2wg77ip6hlm == \t\y\3\y\o\v\q\i\j\u\x\b\2\j\g\s\h\2\y\p\x\2\p\g\6\j\3\9\s\z\w\t\t\b\y\o\e\0\3\u\t\7\j\2\i\s\4\x\u\6\m\o\n\2\w\x\8\h\y\2\d\8\r\p\h\j\p\8\7\r\5\0\t\g\b\w\8\e\9\0\h\y\a\r\5\w\5\h\b\3\x\v\g\i\4\y\q\8\m\v\r\v\x\f\6\5\h\y\l\s\t\q\l\1\2\k\f\q\e\5\p\8\0\6\t\m\a\2\q\y\9\0\0\f\u\6\n\4\o\x\i\b\4\6\2\3\n\u\e\e\i\c\1\d\q\h\u\q\u\7\w\h\8\z\1\q\8\y\n\5\s\b\6\3\e\u\4\9\l\s\m\4\c\j\6\6\e\f\g\b\s\w\d\q\w\l\3\1\o\x\t\j\w\f\p\1\6\c\9\l\y\d\z\r\t\p\f\3\5\v\l\0\r\f\v\6\u\l\r\q\d\f\4\y\3\f\6\q\v\9\d\z\z\r\w\j\6\i\q\s\1\0\9\u\t\n\f\z\n\i\f\r\2\g\f\t\b\r\6\g\d\e\f\y\v\k\n\f\y\a\x\x\b\b\6\4\p\p\m\4\p\t\2\9\2\q\z\r\5\s\u\t\p\d\q\i\b\f\3\e\l\6\a\u\z\8\h\z\y\p\z\r\p\e\1\f\q\g\d\6\3\9\s\b\q\q\9\p\x\2\d\l\g\s\k\0\c\a\0\l\c\m\b\m\l\1\p\7\x\q\i\v\3\f\k\m\8\s\i\d\5\o\4\6\b\g\u\c\8\9\0\p\4\s\v\g\5\4\o\2\1\n\v\p\h\g\l\4\n\y\d\s\7\y\l\x\t\6\8\1\3\1\a\d\k\b\2\f\m\s\y\s\1\9\n\7\z\1\v\z\d\k\t\3\l\s\z\h\e\9\c\n\x\h\h\2\c\f\e\s\u\u\6\5\l\f\2\t\9\q\r\a\8\n\4\2\l\p\v\h\q\r\h\x\4\q\w\7\4\t\w\4\m\q\t\z\e\p\a\8\j\u\q\8\c\j\h\w\5\l\i\2\w\g\7\7\i\p\6\h\l\m ]] 00:08:29.411 09:10:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:29.411 09:10:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:29.670 [2024-12-13 09:10:23.347506] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:29.670 [2024-12-13 09:10:23.347672] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63785 ] 00:08:29.670 [2024-12-13 09:10:23.525649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.929 [2024-12-13 09:10:23.613033] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.929 [2024-12-13 09:10:23.789529] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.188  [2024-12-13T09:10:25.016Z] Copying: 512/512 [B] (average 500 kBps) 00:08:31.126 00:08:31.126 09:10:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ty3yovqijuxb2jgsh2ypx2pg6j39szwttbyoe03ut7j2is4xu6mon2wx8hy2d8rphjp87r50tgbw8e90hyar5w5hb3xvgi4yq8mvrvxf65hylstql12kfqe5p806tma2qy900fu6n4oxib4623nueeic1dqhuqu7wh8z1q8yn5sb63eu49lsm4cj66efgbswdqwl31oxtjwfp16c9lydzrtpf35vl0rfv6ulrqdf4y3f6qv9dzzrwj6iqs109utnfznifr2gftbr6gdefyvknfyaxxbb64ppm4pt292qzr5sutpdqibf3el6auz8hzypzrpe1fqgd639sbqq9px2dlgsk0ca0lcmbml1p7xqiv3fkm8sid5o46bguc890p4svg54o21nvphgl4nyds7ylxt68131adkb2fmsys19n7z1vzdkt3lszhe9cnxhh2cfesuu65lf2t9qra8n42lpvhqrhx4qw74tw4mqtzepa8juq8cjhw5li2wg77ip6hlm == \t\y\3\y\o\v\q\i\j\u\x\b\2\j\g\s\h\2\y\p\x\2\p\g\6\j\3\9\s\z\w\t\t\b\y\o\e\0\3\u\t\7\j\2\i\s\4\x\u\6\m\o\n\2\w\x\8\h\y\2\d\8\r\p\h\j\p\8\7\r\5\0\t\g\b\w\8\e\9\0\h\y\a\r\5\w\5\h\b\3\x\v\g\i\4\y\q\8\m\v\r\v\x\f\6\5\h\y\l\s\t\q\l\1\2\k\f\q\e\5\p\8\0\6\t\m\a\2\q\y\9\0\0\f\u\6\n\4\o\x\i\b\4\6\2\3\n\u\e\e\i\c\1\d\q\h\u\q\u\7\w\h\8\z\1\q\8\y\n\5\s\b\6\3\e\u\4\9\l\s\m\4\c\j\6\6\e\f\g\b\s\w\d\q\w\l\3\1\o\x\t\j\w\f\p\1\6\c\9\l\y\d\z\r\t\p\f\3\5\v\l\0\r\f\v\6\u\l\r\q\d\f\4\y\3\f\6\q\v\9\d\z\z\r\w\j\6\i\q\s\1\0\9\u\t\n\f\z\n\i\f\r\2\g\f\t\b\r\6\g\d\e\f\y\v\k\n\f\y\a\x\x\b\b\6\4\p\p\m\4\p\t\2\9\2\q\z\r\5\s\u\t\p\d\q\i\b\f\3\e\l\6\a\u\z\8\h\z\y\p\z\r\p\e\1\f\q\g\d\6\3\9\s\b\q\q\9\p\x\2\d\l\g\s\k\0\c\a\0\l\c\m\b\m\l\1\p\7\x\q\i\v\3\f\k\m\8\s\i\d\5\o\4\6\b\g\u\c\8\9\0\p\4\s\v\g\5\4\o\2\1\n\v\p\h\g\l\4\n\y\d\s\7\y\l\x\t\6\8\1\3\1\a\d\k\b\2\f\m\s\y\s\1\9\n\7\z\1\v\z\d\k\t\3\l\s\z\h\e\9\c\n\x\h\h\2\c\f\e\s\u\u\6\5\l\f\2\t\9\q\r\a\8\n\4\2\l\p\v\h\q\r\h\x\4\q\w\7\4\t\w\4\m\q\t\z\e\p\a\8\j\u\q\8\c\j\h\w\5\l\i\2\w\g\7\7\i\p\6\h\l\m ]] 00:08:31.126 09:10:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:31.126 09:10:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:31.126 [2024-12-13 09:10:24.869957] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:31.126 [2024-12-13 09:10:24.870135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63805 ] 00:08:31.385 [2024-12-13 09:10:25.033198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.385 [2024-12-13 09:10:25.123790] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.644 [2024-12-13 09:10:25.290458] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.644  [2024-12-13T09:10:26.471Z] Copying: 512/512 [B] (average 166 kBps) 00:08:32.581 00:08:32.581 09:10:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ty3yovqijuxb2jgsh2ypx2pg6j39szwttbyoe03ut7j2is4xu6mon2wx8hy2d8rphjp87r50tgbw8e90hyar5w5hb3xvgi4yq8mvrvxf65hylstql12kfqe5p806tma2qy900fu6n4oxib4623nueeic1dqhuqu7wh8z1q8yn5sb63eu49lsm4cj66efgbswdqwl31oxtjwfp16c9lydzrtpf35vl0rfv6ulrqdf4y3f6qv9dzzrwj6iqs109utnfznifr2gftbr6gdefyvknfyaxxbb64ppm4pt292qzr5sutpdqibf3el6auz8hzypzrpe1fqgd639sbqq9px2dlgsk0ca0lcmbml1p7xqiv3fkm8sid5o46bguc890p4svg54o21nvphgl4nyds7ylxt68131adkb2fmsys19n7z1vzdkt3lszhe9cnxhh2cfesuu65lf2t9qra8n42lpvhqrhx4qw74tw4mqtzepa8juq8cjhw5li2wg77ip6hlm == \t\y\3\y\o\v\q\i\j\u\x\b\2\j\g\s\h\2\y\p\x\2\p\g\6\j\3\9\s\z\w\t\t\b\y\o\e\0\3\u\t\7\j\2\i\s\4\x\u\6\m\o\n\2\w\x\8\h\y\2\d\8\r\p\h\j\p\8\7\r\5\0\t\g\b\w\8\e\9\0\h\y\a\r\5\w\5\h\b\3\x\v\g\i\4\y\q\8\m\v\r\v\x\f\6\5\h\y\l\s\t\q\l\1\2\k\f\q\e\5\p\8\0\6\t\m\a\2\q\y\9\0\0\f\u\6\n\4\o\x\i\b\4\6\2\3\n\u\e\e\i\c\1\d\q\h\u\q\u\7\w\h\8\z\1\q\8\y\n\5\s\b\6\3\e\u\4\9\l\s\m\4\c\j\6\6\e\f\g\b\s\w\d\q\w\l\3\1\o\x\t\j\w\f\p\1\6\c\9\l\y\d\z\r\t\p\f\3\5\v\l\0\r\f\v\6\u\l\r\q\d\f\4\y\3\f\6\q\v\9\d\z\z\r\w\j\6\i\q\s\1\0\9\u\t\n\f\z\n\i\f\r\2\g\f\t\b\r\6\g\d\e\f\y\v\k\n\f\y\a\x\x\b\b\6\4\p\p\m\4\p\t\2\9\2\q\z\r\5\s\u\t\p\d\q\i\b\f\3\e\l\6\a\u\z\8\h\z\y\p\z\r\p\e\1\f\q\g\d\6\3\9\s\b\q\q\9\p\x\2\d\l\g\s\k\0\c\a\0\l\c\m\b\m\l\1\p\7\x\q\i\v\3\f\k\m\8\s\i\d\5\o\4\6\b\g\u\c\8\9\0\p\4\s\v\g\5\4\o\2\1\n\v\p\h\g\l\4\n\y\d\s\7\y\l\x\t\6\8\1\3\1\a\d\k\b\2\f\m\s\y\s\1\9\n\7\z\1\v\z\d\k\t\3\l\s\z\h\e\9\c\n\x\h\h\2\c\f\e\s\u\u\6\5\l\f\2\t\9\q\r\a\8\n\4\2\l\p\v\h\q\r\h\x\4\q\w\7\4\t\w\4\m\q\t\z\e\p\a\8\j\u\q\8\c\j\h\w\5\l\i\2\w\g\7\7\i\p\6\h\l\m ]] 00:08:32.581 09:10:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:32.581 09:10:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:32.581 [2024-12-13 09:10:26.402070] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:32.581 [2024-12-13 09:10:26.402237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63828 ] 00:08:32.840 [2024-12-13 09:10:26.565627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.840 [2024-12-13 09:10:26.662421] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.099 [2024-12-13 09:10:26.820434] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:33.099  [2024-12-13T09:10:27.926Z] Copying: 512/512 [B] (average 166 kBps) 00:08:34.036 00:08:34.036 09:10:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ty3yovqijuxb2jgsh2ypx2pg6j39szwttbyoe03ut7j2is4xu6mon2wx8hy2d8rphjp87r50tgbw8e90hyar5w5hb3xvgi4yq8mvrvxf65hylstql12kfqe5p806tma2qy900fu6n4oxib4623nueeic1dqhuqu7wh8z1q8yn5sb63eu49lsm4cj66efgbswdqwl31oxtjwfp16c9lydzrtpf35vl0rfv6ulrqdf4y3f6qv9dzzrwj6iqs109utnfznifr2gftbr6gdefyvknfyaxxbb64ppm4pt292qzr5sutpdqibf3el6auz8hzypzrpe1fqgd639sbqq9px2dlgsk0ca0lcmbml1p7xqiv3fkm8sid5o46bguc890p4svg54o21nvphgl4nyds7ylxt68131adkb2fmsys19n7z1vzdkt3lszhe9cnxhh2cfesuu65lf2t9qra8n42lpvhqrhx4qw74tw4mqtzepa8juq8cjhw5li2wg77ip6hlm == \t\y\3\y\o\v\q\i\j\u\x\b\2\j\g\s\h\2\y\p\x\2\p\g\6\j\3\9\s\z\w\t\t\b\y\o\e\0\3\u\t\7\j\2\i\s\4\x\u\6\m\o\n\2\w\x\8\h\y\2\d\8\r\p\h\j\p\8\7\r\5\0\t\g\b\w\8\e\9\0\h\y\a\r\5\w\5\h\b\3\x\v\g\i\4\y\q\8\m\v\r\v\x\f\6\5\h\y\l\s\t\q\l\1\2\k\f\q\e\5\p\8\0\6\t\m\a\2\q\y\9\0\0\f\u\6\n\4\o\x\i\b\4\6\2\3\n\u\e\e\i\c\1\d\q\h\u\q\u\7\w\h\8\z\1\q\8\y\n\5\s\b\6\3\e\u\4\9\l\s\m\4\c\j\6\6\e\f\g\b\s\w\d\q\w\l\3\1\o\x\t\j\w\f\p\1\6\c\9\l\y\d\z\r\t\p\f\3\5\v\l\0\r\f\v\6\u\l\r\q\d\f\4\y\3\f\6\q\v\9\d\z\z\r\w\j\6\i\q\s\1\0\9\u\t\n\f\z\n\i\f\r\2\g\f\t\b\r\6\g\d\e\f\y\v\k\n\f\y\a\x\x\b\b\6\4\p\p\m\4\p\t\2\9\2\q\z\r\5\s\u\t\p\d\q\i\b\f\3\e\l\6\a\u\z\8\h\z\y\p\z\r\p\e\1\f\q\g\d\6\3\9\s\b\q\q\9\p\x\2\d\l\g\s\k\0\c\a\0\l\c\m\b\m\l\1\p\7\x\q\i\v\3\f\k\m\8\s\i\d\5\o\4\6\b\g\u\c\8\9\0\p\4\s\v\g\5\4\o\2\1\n\v\p\h\g\l\4\n\y\d\s\7\y\l\x\t\6\8\1\3\1\a\d\k\b\2\f\m\s\y\s\1\9\n\7\z\1\v\z\d\k\t\3\l\s\z\h\e\9\c\n\x\h\h\2\c\f\e\s\u\u\6\5\l\f\2\t\9\q\r\a\8\n\4\2\l\p\v\h\q\r\h\x\4\q\w\7\4\t\w\4\m\q\t\z\e\p\a\8\j\u\q\8\c\j\h\w\5\l\i\2\w\g\7\7\i\p\6\h\l\m ]] 00:08:34.036 00:08:34.036 real 0m12.162s 00:08:34.036 user 0m9.750s 00:08:34.036 sys 0m6.800s 00:08:34.036 ************************************ 00:08:34.036 END TEST dd_flags_misc 00:08:34.036 ************************************ 00:08:34.036 09:10:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.036 09:10:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:34.036 09:10:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:08:34.036 09:10:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:34.036 * Second test run, disabling liburing, forcing AIO 00:08:34.036 09:10:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:34.036 09:10:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:34.036 09:10:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:34.036 09:10:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.036 09:10:27 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:34.036 ************************************ 00:08:34.036 START TEST dd_flag_append_forced_aio 00:08:34.036 ************************************ 00:08:34.036 09:10:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:08:34.036 09:10:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:08:34.036 09:10:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:08:34.036 09:10:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:08:34.036 09:10:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:34.036 09:10:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:34.036 09:10:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=blysi9ix1h9mhj654w2blkdvoucv2nbt 00:08:34.036 09:10:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:08:34.036 09:10:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:34.036 09:10:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:34.036 09:10:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=bm75o6er77ib47xtnjs1n8rrv1muu54g 00:08:34.036 09:10:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s blysi9ix1h9mhj654w2blkdvoucv2nbt 00:08:34.036 09:10:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s bm75o6er77ib47xtnjs1n8rrv1muu54g 00:08:34.036 09:10:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:34.295 [2024-12-13 09:10:28.020795] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:34.295 [2024-12-13 09:10:28.021166] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63863 ] 00:08:34.554 [2024-12-13 09:10:28.205205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.554 [2024-12-13 09:10:28.293409] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.813 [2024-12-13 09:10:28.444689] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.813  [2024-12-13T09:10:29.640Z] Copying: 32/32 [B] (average 31 kBps) 00:08:35.750 00:08:35.750 ************************************ 00:08:35.750 END TEST dd_flag_append_forced_aio 00:08:35.750 ************************************ 00:08:35.750 09:10:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ bm75o6er77ib47xtnjs1n8rrv1muu54gblysi9ix1h9mhj654w2blkdvoucv2nbt == \b\m\7\5\o\6\e\r\7\7\i\b\4\7\x\t\n\j\s\1\n\8\r\r\v\1\m\u\u\5\4\g\b\l\y\s\i\9\i\x\1\h\9\m\h\j\6\5\4\w\2\b\l\k\d\v\o\u\c\v\2\n\b\t ]] 00:08:35.750 00:08:35.750 real 0m1.527s 00:08:35.750 user 0m1.203s 00:08:35.750 sys 0m0.202s 00:08:35.750 09:10:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.750 09:10:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:35.750 09:10:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:35.750 09:10:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:35.750 09:10:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.750 09:10:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:35.750 ************************************ 00:08:35.750 START TEST dd_flag_directory_forced_aio 00:08:35.750 ************************************ 00:08:35.750 09:10:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:08:35.750 09:10:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:35.750 09:10:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:35.750 09:10:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:35.750 09:10:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.750 09:10:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.750 09:10:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.750 09:10:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.750 09:10:29 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.750 09:10:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.750 09:10:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.750 09:10:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:35.750 09:10:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:35.750 [2024-12-13 09:10:29.594658] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:35.750 [2024-12-13 09:10:29.594822] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63907 ] 00:08:36.009 [2024-12-13 09:10:29.773641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.009 [2024-12-13 09:10:29.859865] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.313 [2024-12-13 09:10:30.017459] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.313 [2024-12-13 09:10:30.116331] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:36.313 [2024-12-13 09:10:30.116417] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:36.313 [2024-12-13 09:10:30.116443] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:36.886 [2024-12-13 09:10:30.752500] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:37.145 09:10:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:08:37.145 09:10:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:37.145 09:10:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:08:37.145 09:10:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:37.145 09:10:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:37.145 09:10:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:37.145 09:10:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:37.145 09:10:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:37.145 09:10:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:37.145 09:10:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:37.145 09:10:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:37.145 09:10:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:37.145 09:10:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:37.145 09:10:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:37.145 09:10:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:37.145 09:10:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:37.145 09:10:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:37.145 09:10:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:37.403 [2024-12-13 09:10:31.094556] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:37.404 [2024-12-13 09:10:31.095037] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63923 ] 00:08:37.404 [2024-12-13 09:10:31.276952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.662 [2024-12-13 09:10:31.372099] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.662 [2024-12-13 09:10:31.532778] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.920 [2024-12-13 09:10:31.625995] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:37.920 [2024-12-13 09:10:31.626061] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:37.920 [2024-12-13 09:10:31.626101] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:38.486 [2024-12-13 09:10:32.282074] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:38.744 09:10:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:08:38.744 09:10:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:38.745 09:10:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:08:38.745 09:10:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:38.745 09:10:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:38.745 09:10:32 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:38.745 00:08:38.745 real 0m3.035s 00:08:38.745 user 0m2.439s 00:08:38.745 sys 0m0.377s 00:08:38.745 09:10:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.745 ************************************ 00:08:38.745 END TEST dd_flag_directory_forced_aio 00:08:38.745 ************************************ 00:08:38.745 09:10:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:38.745 09:10:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:38.745 09:10:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:38.745 09:10:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.745 09:10:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:38.745 ************************************ 00:08:38.745 START TEST dd_flag_nofollow_forced_aio 00:08:38.745 ************************************ 00:08:38.745 09:10:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:08:38.745 09:10:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:38.745 09:10:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:38.745 09:10:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:38.745 09:10:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:38.745 09:10:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:38.745 09:10:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:38.745 09:10:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:38.745 09:10:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.745 09:10:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:38.745 09:10:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.745 09:10:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:38.745 09:10:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.745 09:10:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:38.745 09:10:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.745 09:10:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:38.745 09:10:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:39.004 [2024-12-13 09:10:32.701205] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:39.004 [2024-12-13 09:10:32.701716] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63969 ] 00:08:39.004 [2024-12-13 09:10:32.885284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.262 [2024-12-13 09:10:32.983474] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.262 [2024-12-13 09:10:33.141058] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.521 [2024-12-13 09:10:33.246421] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:39.521 [2024-12-13 09:10:33.246501] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:39.521 [2024-12-13 09:10:33.246525] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:40.088 [2024-12-13 09:10:33.907191] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:40.346 09:10:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:08:40.346 09:10:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:40.346 09:10:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:08:40.346 09:10:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:40.346 09:10:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:40.346 09:10:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:40.346 09:10:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:40.346 09:10:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:40.346 09:10:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:40.346 09:10:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.346 09:10:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.346 09:10:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.346 09:10:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.346 09:10:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.346 09:10:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.346 09:10:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.346 09:10:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:40.346 09:10:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:40.605 [2024-12-13 09:10:34.255735] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:40.605 [2024-12-13 09:10:34.256125] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63985 ] 00:08:40.605 [2024-12-13 09:10:34.432221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.863 [2024-12-13 09:10:34.520929] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.863 [2024-12-13 09:10:34.673517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:41.121 [2024-12-13 09:10:34.763856] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:41.121 [2024-12-13 09:10:34.763939] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:41.121 [2024-12-13 09:10:34.763961] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:41.690 [2024-12-13 09:10:35.386199] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:41.948 09:10:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:08:41.948 09:10:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:41.948 09:10:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:08:41.948 09:10:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:41.948 09:10:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:41.948 09:10:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:41.948 09:10:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:08:41.948 09:10:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:41.948 09:10:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:41.948 09:10:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:41.949 [2024-12-13 09:10:35.716325] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:41.949 [2024-12-13 09:10:35.716731] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64010 ] 00:08:42.207 [2024-12-13 09:10:35.878335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.207 [2024-12-13 09:10:35.966822] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.466 [2024-12-13 09:10:36.127404] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:42.466  [2024-12-13T09:10:37.292Z] Copying: 512/512 [B] (average 500 kBps) 00:08:43.402 00:08:43.402 09:10:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ njcfiovwjem5vldmgqq3m559pc4xr0ksc04oknmrzqrev0ri10kny851skw9emt9y6b2q82oyxvtfsqxq44x2l12d2v59n0fcox4gbk2svfraznvuhsd2fn07ino1jencgrwno3cgzsm5wmdojlpn2afu0lddx8s0i8s8sbmlda57qjl9ynqpbtpoyxhng0ljpibyxwo1e7rgu4r93s0lardi7r8u14roj9c2aukp17wen4fjbbqjaa1q5uh7qb4j177idrg0vabj3i9c23k5hb357pzjtq0t5ofl7ta38fr6ayuj8x9t4lim2j37sqq7pn5tdy2g9yppws4ntppey620l6wq40ay0wj2j2nrvw5il4nd2oqjhumz0r609yp90k5m2crdbwy7uqm5bntdvv8j41z5x9xqz56zil551zksi5pk7yi1212kklppw1vq9yq1wt1ypvzpr56jf1p1jvga5zfocqnp8tb7z5go5yaw4216zd1shrf5nckwsi0 == \n\j\c\f\i\o\v\w\j\e\m\5\v\l\d\m\g\q\q\3\m\5\5\9\p\c\4\x\r\0\k\s\c\0\4\o\k\n\m\r\z\q\r\e\v\0\r\i\1\0\k\n\y\8\5\1\s\k\w\9\e\m\t\9\y\6\b\2\q\8\2\o\y\x\v\t\f\s\q\x\q\4\4\x\2\l\1\2\d\2\v\5\9\n\0\f\c\o\x\4\g\b\k\2\s\v\f\r\a\z\n\v\u\h\s\d\2\f\n\0\7\i\n\o\1\j\e\n\c\g\r\w\n\o\3\c\g\z\s\m\5\w\m\d\o\j\l\p\n\2\a\f\u\0\l\d\d\x\8\s\0\i\8\s\8\s\b\m\l\d\a\5\7\q\j\l\9\y\n\q\p\b\t\p\o\y\x\h\n\g\0\l\j\p\i\b\y\x\w\o\1\e\7\r\g\u\4\r\9\3\s\0\l\a\r\d\i\7\r\8\u\1\4\r\o\j\9\c\2\a\u\k\p\1\7\w\e\n\4\f\j\b\b\q\j\a\a\1\q\5\u\h\7\q\b\4\j\1\7\7\i\d\r\g\0\v\a\b\j\3\i\9\c\2\3\k\5\h\b\3\5\7\p\z\j\t\q\0\t\5\o\f\l\7\t\a\3\8\f\r\6\a\y\u\j\8\x\9\t\4\l\i\m\2\j\3\7\s\q\q\7\p\n\5\t\d\y\2\g\9\y\p\p\w\s\4\n\t\p\p\e\y\6\2\0\l\6\w\q\4\0\a\y\0\w\j\2\j\2\n\r\v\w\5\i\l\4\n\d\2\o\q\j\h\u\m\z\0\r\6\0\9\y\p\9\0\k\5\m\2\c\r\d\b\w\y\7\u\q\m\5\b\n\t\d\v\v\8\j\4\1\z\5\x\9\x\q\z\5\6\z\i\l\5\5\1\z\k\s\i\5\p\k\7\y\i\1\2\1\2\k\k\l\p\p\w\1\v\q\9\y\q\1\w\t\1\y\p\v\z\p\r\5\6\j\f\1\p\1\j\v\g\a\5\z\f\o\c\q\n\p\8\t\b\7\z\5\g\o\5\y\a\w\4\2\1\6\z\d\1\s\h\r\f\5\n\c\k\w\s\i\0 ]] 00:08:43.402 00:08:43.402 real 0m4.517s 00:08:43.402 user 0m3.618s 00:08:43.402 sys 0m0.555s 00:08:43.402 09:10:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.402 ************************************ 00:08:43.402 END TEST dd_flag_nofollow_forced_aio 00:08:43.402 ************************************ 00:08:43.402 09:10:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:43.402 09:10:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:08:43.402 09:10:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:43.402 09:10:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.402 09:10:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:43.402 ************************************ 00:08:43.402 START TEST dd_flag_noatime_forced_aio 00:08:43.402 ************************************ 00:08:43.402 09:10:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:08:43.402 09:10:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:08:43.402 09:10:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:08:43.402 09:10:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:08:43.402 09:10:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:43.402 09:10:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:43.402 09:10:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:43.402 09:10:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1734081036 00:08:43.402 09:10:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:43.402 09:10:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1734081037 00:08:43.402 09:10:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:08:44.337 09:10:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:44.595 [2024-12-13 09:10:38.290210] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:44.596 [2024-12-13 09:10:38.290418] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64057 ] 00:08:44.596 [2024-12-13 09:10:38.470417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.854 [2024-12-13 09:10:38.564488] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.854 [2024-12-13 09:10:38.731701] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:45.112  [2024-12-13T09:10:39.941Z] Copying: 512/512 [B] (average 500 kBps) 00:08:46.051 00:08:46.051 09:10:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:46.051 09:10:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1734081036 )) 00:08:46.051 09:10:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:46.051 09:10:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1734081037 )) 00:08:46.051 09:10:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:46.051 [2024-12-13 09:10:39.873053] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:46.051 [2024-12-13 09:10:39.873227] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64086 ] 00:08:46.315 [2024-12-13 09:10:40.051790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.315 [2024-12-13 09:10:40.139324] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.587 [2024-12-13 09:10:40.304669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.587  [2024-12-13T09:10:41.413Z] Copying: 512/512 [B] (average 500 kBps) 00:08:47.523 00:08:47.523 09:10:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:47.523 ************************************ 00:08:47.523 END TEST dd_flag_noatime_forced_aio 00:08:47.523 ************************************ 00:08:47.523 09:10:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1734081040 )) 00:08:47.523 00:08:47.523 real 0m4.116s 00:08:47.523 user 0m2.432s 00:08:47.523 sys 0m0.438s 00:08:47.523 09:10:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.523 09:10:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:47.523 09:10:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:47.523 09:10:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:47.523 09:10:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.523 09:10:41 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:47.523 ************************************ 00:08:47.523 START TEST dd_flags_misc_forced_aio 00:08:47.523 ************************************ 00:08:47.523 09:10:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:08:47.523 09:10:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:47.523 09:10:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:47.523 09:10:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:47.523 09:10:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:47.523 09:10:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:47.523 09:10:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:47.523 09:10:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:47.523 09:10:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:47.523 09:10:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:47.782 [2024-12-13 09:10:41.450447] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:47.782 [2024-12-13 09:10:41.450637] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64121 ] 00:08:47.782 [2024-12-13 09:10:41.634005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.041 [2024-12-13 09:10:41.726016] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.041 [2024-12-13 09:10:41.887158] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:48.300  [2024-12-13T09:10:43.126Z] Copying: 512/512 [B] (average 500 kBps) 00:08:49.236 00:08:49.237 09:10:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ cfimrlhx0ylekws2jz88uql5v19vtn3liz9kq4uwm0x4f46vn2a8xvc9cdwhjzyqsmc380g6y395t9llo7ptzx9mpv2jp7to3yielpf7lm9xzo9dh1a0uwe8kavv97flmg0bz48o44pg5aqw5s5sxo6389scd4t7a8vljuv20tnv0ebpdffc6zntbkukajl0d2uj6tkt4q2t8jgbc3br12s3228x7eqme2cp4zbbc1xc2gfcfd9iftrotbiyiaphhc4hb5x7qb97b2dspsq11224kcz5drutnt47e96e1zy76mu2vlqmy9at1yr4cr13nw0cqgr8ag4axrho24qpyunkc0muj04bso9x2in94d1ns5271rb1r5fdhlf3ldo6mkco5b04uxrzsb949mggbtsx8ra6odm6q51z4vso87n3yxcsu2u43la5sbljeniefbahdnqzfle47uqmzd50o9r5n1pomaaq70cjfavxis6k9k37elzw2r95p0amo1rb == 
\c\f\i\m\r\l\h\x\0\y\l\e\k\w\s\2\j\z\8\8\u\q\l\5\v\1\9\v\t\n\3\l\i\z\9\k\q\4\u\w\m\0\x\4\f\4\6\v\n\2\a\8\x\v\c\9\c\d\w\h\j\z\y\q\s\m\c\3\8\0\g\6\y\3\9\5\t\9\l\l\o\7\p\t\z\x\9\m\p\v\2\j\p\7\t\o\3\y\i\e\l\p\f\7\l\m\9\x\z\o\9\d\h\1\a\0\u\w\e\8\k\a\v\v\9\7\f\l\m\g\0\b\z\4\8\o\4\4\p\g\5\a\q\w\5\s\5\s\x\o\6\3\8\9\s\c\d\4\t\7\a\8\v\l\j\u\v\2\0\t\n\v\0\e\b\p\d\f\f\c\6\z\n\t\b\k\u\k\a\j\l\0\d\2\u\j\6\t\k\t\4\q\2\t\8\j\g\b\c\3\b\r\1\2\s\3\2\2\8\x\7\e\q\m\e\2\c\p\4\z\b\b\c\1\x\c\2\g\f\c\f\d\9\i\f\t\r\o\t\b\i\y\i\a\p\h\h\c\4\h\b\5\x\7\q\b\9\7\b\2\d\s\p\s\q\1\1\2\2\4\k\c\z\5\d\r\u\t\n\t\4\7\e\9\6\e\1\z\y\7\6\m\u\2\v\l\q\m\y\9\a\t\1\y\r\4\c\r\1\3\n\w\0\c\q\g\r\8\a\g\4\a\x\r\h\o\2\4\q\p\y\u\n\k\c\0\m\u\j\0\4\b\s\o\9\x\2\i\n\9\4\d\1\n\s\5\2\7\1\r\b\1\r\5\f\d\h\l\f\3\l\d\o\6\m\k\c\o\5\b\0\4\u\x\r\z\s\b\9\4\9\m\g\g\b\t\s\x\8\r\a\6\o\d\m\6\q\5\1\z\4\v\s\o\8\7\n\3\y\x\c\s\u\2\u\4\3\l\a\5\s\b\l\j\e\n\i\e\f\b\a\h\d\n\q\z\f\l\e\4\7\u\q\m\z\d\5\0\o\9\r\5\n\1\p\o\m\a\a\q\7\0\c\j\f\a\v\x\i\s\6\k\9\k\3\7\e\l\z\w\2\r\9\5\p\0\a\m\o\1\r\b ]] 00:08:49.237 09:10:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:49.237 09:10:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:49.237 [2024-12-13 09:10:42.970401] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:49.237 [2024-12-13 09:10:42.970615] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64139 ] 00:08:49.496 [2024-12-13 09:10:43.151166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.496 [2024-12-13 09:10:43.240657] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.755 [2024-12-13 09:10:43.405588] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:49.755  [2024-12-13T09:10:44.582Z] Copying: 512/512 [B] (average 500 kBps) 00:08:50.692 00:08:50.692 09:10:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ cfimrlhx0ylekws2jz88uql5v19vtn3liz9kq4uwm0x4f46vn2a8xvc9cdwhjzyqsmc380g6y395t9llo7ptzx9mpv2jp7to3yielpf7lm9xzo9dh1a0uwe8kavv97flmg0bz48o44pg5aqw5s5sxo6389scd4t7a8vljuv20tnv0ebpdffc6zntbkukajl0d2uj6tkt4q2t8jgbc3br12s3228x7eqme2cp4zbbc1xc2gfcfd9iftrotbiyiaphhc4hb5x7qb97b2dspsq11224kcz5drutnt47e96e1zy76mu2vlqmy9at1yr4cr13nw0cqgr8ag4axrho24qpyunkc0muj04bso9x2in94d1ns5271rb1r5fdhlf3ldo6mkco5b04uxrzsb949mggbtsx8ra6odm6q51z4vso87n3yxcsu2u43la5sbljeniefbahdnqzfle47uqmzd50o9r5n1pomaaq70cjfavxis6k9k37elzw2r95p0amo1rb == 
\c\f\i\m\r\l\h\x\0\y\l\e\k\w\s\2\j\z\8\8\u\q\l\5\v\1\9\v\t\n\3\l\i\z\9\k\q\4\u\w\m\0\x\4\f\4\6\v\n\2\a\8\x\v\c\9\c\d\w\h\j\z\y\q\s\m\c\3\8\0\g\6\y\3\9\5\t\9\l\l\o\7\p\t\z\x\9\m\p\v\2\j\p\7\t\o\3\y\i\e\l\p\f\7\l\m\9\x\z\o\9\d\h\1\a\0\u\w\e\8\k\a\v\v\9\7\f\l\m\g\0\b\z\4\8\o\4\4\p\g\5\a\q\w\5\s\5\s\x\o\6\3\8\9\s\c\d\4\t\7\a\8\v\l\j\u\v\2\0\t\n\v\0\e\b\p\d\f\f\c\6\z\n\t\b\k\u\k\a\j\l\0\d\2\u\j\6\t\k\t\4\q\2\t\8\j\g\b\c\3\b\r\1\2\s\3\2\2\8\x\7\e\q\m\e\2\c\p\4\z\b\b\c\1\x\c\2\g\f\c\f\d\9\i\f\t\r\o\t\b\i\y\i\a\p\h\h\c\4\h\b\5\x\7\q\b\9\7\b\2\d\s\p\s\q\1\1\2\2\4\k\c\z\5\d\r\u\t\n\t\4\7\e\9\6\e\1\z\y\7\6\m\u\2\v\l\q\m\y\9\a\t\1\y\r\4\c\r\1\3\n\w\0\c\q\g\r\8\a\g\4\a\x\r\h\o\2\4\q\p\y\u\n\k\c\0\m\u\j\0\4\b\s\o\9\x\2\i\n\9\4\d\1\n\s\5\2\7\1\r\b\1\r\5\f\d\h\l\f\3\l\d\o\6\m\k\c\o\5\b\0\4\u\x\r\z\s\b\9\4\9\m\g\g\b\t\s\x\8\r\a\6\o\d\m\6\q\5\1\z\4\v\s\o\8\7\n\3\y\x\c\s\u\2\u\4\3\l\a\5\s\b\l\j\e\n\i\e\f\b\a\h\d\n\q\z\f\l\e\4\7\u\q\m\z\d\5\0\o\9\r\5\n\1\p\o\m\a\a\q\7\0\c\j\f\a\v\x\i\s\6\k\9\k\3\7\e\l\z\w\2\r\9\5\p\0\a\m\o\1\r\b ]] 00:08:50.692 09:10:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:50.692 09:10:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:50.692 [2024-12-13 09:10:44.498836] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:50.692 [2024-12-13 09:10:44.499358] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64158 ] 00:08:50.951 [2024-12-13 09:10:44.679125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.951 [2024-12-13 09:10:44.763577] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.210 [2024-12-13 09:10:44.912610] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.210  [2024-12-13T09:10:46.035Z] Copying: 512/512 [B] (average 166 kBps) 00:08:52.145 00:08:52.145 09:10:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ cfimrlhx0ylekws2jz88uql5v19vtn3liz9kq4uwm0x4f46vn2a8xvc9cdwhjzyqsmc380g6y395t9llo7ptzx9mpv2jp7to3yielpf7lm9xzo9dh1a0uwe8kavv97flmg0bz48o44pg5aqw5s5sxo6389scd4t7a8vljuv20tnv0ebpdffc6zntbkukajl0d2uj6tkt4q2t8jgbc3br12s3228x7eqme2cp4zbbc1xc2gfcfd9iftrotbiyiaphhc4hb5x7qb97b2dspsq11224kcz5drutnt47e96e1zy76mu2vlqmy9at1yr4cr13nw0cqgr8ag4axrho24qpyunkc0muj04bso9x2in94d1ns5271rb1r5fdhlf3ldo6mkco5b04uxrzsb949mggbtsx8ra6odm6q51z4vso87n3yxcsu2u43la5sbljeniefbahdnqzfle47uqmzd50o9r5n1pomaaq70cjfavxis6k9k37elzw2r95p0amo1rb == 
\c\f\i\m\r\l\h\x\0\y\l\e\k\w\s\2\j\z\8\8\u\q\l\5\v\1\9\v\t\n\3\l\i\z\9\k\q\4\u\w\m\0\x\4\f\4\6\v\n\2\a\8\x\v\c\9\c\d\w\h\j\z\y\q\s\m\c\3\8\0\g\6\y\3\9\5\t\9\l\l\o\7\p\t\z\x\9\m\p\v\2\j\p\7\t\o\3\y\i\e\l\p\f\7\l\m\9\x\z\o\9\d\h\1\a\0\u\w\e\8\k\a\v\v\9\7\f\l\m\g\0\b\z\4\8\o\4\4\p\g\5\a\q\w\5\s\5\s\x\o\6\3\8\9\s\c\d\4\t\7\a\8\v\l\j\u\v\2\0\t\n\v\0\e\b\p\d\f\f\c\6\z\n\t\b\k\u\k\a\j\l\0\d\2\u\j\6\t\k\t\4\q\2\t\8\j\g\b\c\3\b\r\1\2\s\3\2\2\8\x\7\e\q\m\e\2\c\p\4\z\b\b\c\1\x\c\2\g\f\c\f\d\9\i\f\t\r\o\t\b\i\y\i\a\p\h\h\c\4\h\b\5\x\7\q\b\9\7\b\2\d\s\p\s\q\1\1\2\2\4\k\c\z\5\d\r\u\t\n\t\4\7\e\9\6\e\1\z\y\7\6\m\u\2\v\l\q\m\y\9\a\t\1\y\r\4\c\r\1\3\n\w\0\c\q\g\r\8\a\g\4\a\x\r\h\o\2\4\q\p\y\u\n\k\c\0\m\u\j\0\4\b\s\o\9\x\2\i\n\9\4\d\1\n\s\5\2\7\1\r\b\1\r\5\f\d\h\l\f\3\l\d\o\6\m\k\c\o\5\b\0\4\u\x\r\z\s\b\9\4\9\m\g\g\b\t\s\x\8\r\a\6\o\d\m\6\q\5\1\z\4\v\s\o\8\7\n\3\y\x\c\s\u\2\u\4\3\l\a\5\s\b\l\j\e\n\i\e\f\b\a\h\d\n\q\z\f\l\e\4\7\u\q\m\z\d\5\0\o\9\r\5\n\1\p\o\m\a\a\q\7\0\c\j\f\a\v\x\i\s\6\k\9\k\3\7\e\l\z\w\2\r\9\5\p\0\a\m\o\1\r\b ]] 00:08:52.145 09:10:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:52.145 09:10:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:52.146 [2024-12-13 09:10:45.984504] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:52.146 [2024-12-13 09:10:45.984682] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64178 ] 00:08:52.404 [2024-12-13 09:10:46.161865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.404 [2024-12-13 09:10:46.243656] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.664 [2024-12-13 09:10:46.390299] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:52.664  [2024-12-13T09:10:47.492Z] Copying: 512/512 [B] (average 500 kBps) 00:08:53.602 00:08:53.602 09:10:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ cfimrlhx0ylekws2jz88uql5v19vtn3liz9kq4uwm0x4f46vn2a8xvc9cdwhjzyqsmc380g6y395t9llo7ptzx9mpv2jp7to3yielpf7lm9xzo9dh1a0uwe8kavv97flmg0bz48o44pg5aqw5s5sxo6389scd4t7a8vljuv20tnv0ebpdffc6zntbkukajl0d2uj6tkt4q2t8jgbc3br12s3228x7eqme2cp4zbbc1xc2gfcfd9iftrotbiyiaphhc4hb5x7qb97b2dspsq11224kcz5drutnt47e96e1zy76mu2vlqmy9at1yr4cr13nw0cqgr8ag4axrho24qpyunkc0muj04bso9x2in94d1ns5271rb1r5fdhlf3ldo6mkco5b04uxrzsb949mggbtsx8ra6odm6q51z4vso87n3yxcsu2u43la5sbljeniefbahdnqzfle47uqmzd50o9r5n1pomaaq70cjfavxis6k9k37elzw2r95p0amo1rb == 
\c\f\i\m\r\l\h\x\0\y\l\e\k\w\s\2\j\z\8\8\u\q\l\5\v\1\9\v\t\n\3\l\i\z\9\k\q\4\u\w\m\0\x\4\f\4\6\v\n\2\a\8\x\v\c\9\c\d\w\h\j\z\y\q\s\m\c\3\8\0\g\6\y\3\9\5\t\9\l\l\o\7\p\t\z\x\9\m\p\v\2\j\p\7\t\o\3\y\i\e\l\p\f\7\l\m\9\x\z\o\9\d\h\1\a\0\u\w\e\8\k\a\v\v\9\7\f\l\m\g\0\b\z\4\8\o\4\4\p\g\5\a\q\w\5\s\5\s\x\o\6\3\8\9\s\c\d\4\t\7\a\8\v\l\j\u\v\2\0\t\n\v\0\e\b\p\d\f\f\c\6\z\n\t\b\k\u\k\a\j\l\0\d\2\u\j\6\t\k\t\4\q\2\t\8\j\g\b\c\3\b\r\1\2\s\3\2\2\8\x\7\e\q\m\e\2\c\p\4\z\b\b\c\1\x\c\2\g\f\c\f\d\9\i\f\t\r\o\t\b\i\y\i\a\p\h\h\c\4\h\b\5\x\7\q\b\9\7\b\2\d\s\p\s\q\1\1\2\2\4\k\c\z\5\d\r\u\t\n\t\4\7\e\9\6\e\1\z\y\7\6\m\u\2\v\l\q\m\y\9\a\t\1\y\r\4\c\r\1\3\n\w\0\c\q\g\r\8\a\g\4\a\x\r\h\o\2\4\q\p\y\u\n\k\c\0\m\u\j\0\4\b\s\o\9\x\2\i\n\9\4\d\1\n\s\5\2\7\1\r\b\1\r\5\f\d\h\l\f\3\l\d\o\6\m\k\c\o\5\b\0\4\u\x\r\z\s\b\9\4\9\m\g\g\b\t\s\x\8\r\a\6\o\d\m\6\q\5\1\z\4\v\s\o\8\7\n\3\y\x\c\s\u\2\u\4\3\l\a\5\s\b\l\j\e\n\i\e\f\b\a\h\d\n\q\z\f\l\e\4\7\u\q\m\z\d\5\0\o\9\r\5\n\1\p\o\m\a\a\q\7\0\c\j\f\a\v\x\i\s\6\k\9\k\3\7\e\l\z\w\2\r\9\5\p\0\a\m\o\1\r\b ]] 00:08:53.602 09:10:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:53.602 09:10:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:53.602 09:10:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:53.602 09:10:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:53.602 09:10:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:53.602 09:10:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:53.602 [2024-12-13 09:10:47.448226] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:53.602 [2024-12-13 09:10:47.449009] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64197 ] 00:08:53.861 [2024-12-13 09:10:47.633536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.861 [2024-12-13 09:10:47.731377] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.120 [2024-12-13 09:10:47.899782] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:54.120  [2024-12-13T09:10:48.947Z] Copying: 512/512 [B] (average 500 kBps) 00:08:55.057 00:08:55.057 09:10:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0grdpr7842v8qx02rmt7ydhzagjryegew6swvcbeuvtcppmpcvewizgb3hgk00lu10ulfqb07pz9n1bf4i1p4bpokhf99g3n765v5tu6m3c7o7zqz4qr6sdp3mqy3o3frv2kvvtv09xjpwpb4o4fybhkaosdfr1ppaix6kf0ebpdts0243iyzflz4qj6neh33hmvl3vgqx70unul4n3pj816rzbvdyg7rxwlz2q981u05151aav4ytqio2fbnt07ltqpqszpiq7hxnc730bssjj4yfj0tiditfmkbn9bji0arn050je0kew8r1p4d19r2q79kwnuc8somsvlqn051qc5y9k39oezn8hp56crs81qtqrugwfoskgng38fb6ewp4yeu93o6jw4dm0t8kmzt04njwcy6hyeeo8mege1yoc5m6zwecrvc4stg7131p5ow95l3gszzt5pc34313wscx1cp9317wgz5y5ahqra9555j7xgqzauvz4ssp57o9md == \0\g\r\d\p\r\7\8\4\2\v\8\q\x\0\2\r\m\t\7\y\d\h\z\a\g\j\r\y\e\g\e\w\6\s\w\v\c\b\e\u\v\t\c\p\p\m\p\c\v\e\w\i\z\g\b\3\h\g\k\0\0\l\u\1\0\u\l\f\q\b\0\7\p\z\9\n\1\b\f\4\i\1\p\4\b\p\o\k\h\f\9\9\g\3\n\7\6\5\v\5\t\u\6\m\3\c\7\o\7\z\q\z\4\q\r\6\s\d\p\3\m\q\y\3\o\3\f\r\v\2\k\v\v\t\v\0\9\x\j\p\w\p\b\4\o\4\f\y\b\h\k\a\o\s\d\f\r\1\p\p\a\i\x\6\k\f\0\e\b\p\d\t\s\0\2\4\3\i\y\z\f\l\z\4\q\j\6\n\e\h\3\3\h\m\v\l\3\v\g\q\x\7\0\u\n\u\l\4\n\3\p\j\8\1\6\r\z\b\v\d\y\g\7\r\x\w\l\z\2\q\9\8\1\u\0\5\1\5\1\a\a\v\4\y\t\q\i\o\2\f\b\n\t\0\7\l\t\q\p\q\s\z\p\i\q\7\h\x\n\c\7\3\0\b\s\s\j\j\4\y\f\j\0\t\i\d\i\t\f\m\k\b\n\9\b\j\i\0\a\r\n\0\5\0\j\e\0\k\e\w\8\r\1\p\4\d\1\9\r\2\q\7\9\k\w\n\u\c\8\s\o\m\s\v\l\q\n\0\5\1\q\c\5\y\9\k\3\9\o\e\z\n\8\h\p\5\6\c\r\s\8\1\q\t\q\r\u\g\w\f\o\s\k\g\n\g\3\8\f\b\6\e\w\p\4\y\e\u\9\3\o\6\j\w\4\d\m\0\t\8\k\m\z\t\0\4\n\j\w\c\y\6\h\y\e\e\o\8\m\e\g\e\1\y\o\c\5\m\6\z\w\e\c\r\v\c\4\s\t\g\7\1\3\1\p\5\o\w\9\5\l\3\g\s\z\z\t\5\p\c\3\4\3\1\3\w\s\c\x\1\c\p\9\3\1\7\w\g\z\5\y\5\a\h\q\r\a\9\5\5\5\j\7\x\g\q\z\a\u\v\z\4\s\s\p\5\7\o\9\m\d ]] 00:08:55.057 09:10:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:55.057 09:10:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:55.314 [2024-12-13 09:10:48.950230] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:55.314 [2024-12-13 09:10:48.950414] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64217 ] 00:08:55.314 [2024-12-13 09:10:49.126473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.572 [2024-12-13 09:10:49.213565] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.572 [2024-12-13 09:10:49.375217] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:55.830  [2024-12-13T09:10:50.658Z] Copying: 512/512 [B] (average 500 kBps) 00:08:56.768 00:08:56.768 09:10:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0grdpr7842v8qx02rmt7ydhzagjryegew6swvcbeuvtcppmpcvewizgb3hgk00lu10ulfqb07pz9n1bf4i1p4bpokhf99g3n765v5tu6m3c7o7zqz4qr6sdp3mqy3o3frv2kvvtv09xjpwpb4o4fybhkaosdfr1ppaix6kf0ebpdts0243iyzflz4qj6neh33hmvl3vgqx70unul4n3pj816rzbvdyg7rxwlz2q981u05151aav4ytqio2fbnt07ltqpqszpiq7hxnc730bssjj4yfj0tiditfmkbn9bji0arn050je0kew8r1p4d19r2q79kwnuc8somsvlqn051qc5y9k39oezn8hp56crs81qtqrugwfoskgng38fb6ewp4yeu93o6jw4dm0t8kmzt04njwcy6hyeeo8mege1yoc5m6zwecrvc4stg7131p5ow95l3gszzt5pc34313wscx1cp9317wgz5y5ahqra9555j7xgqzauvz4ssp57o9md == \0\g\r\d\p\r\7\8\4\2\v\8\q\x\0\2\r\m\t\7\y\d\h\z\a\g\j\r\y\e\g\e\w\6\s\w\v\c\b\e\u\v\t\c\p\p\m\p\c\v\e\w\i\z\g\b\3\h\g\k\0\0\l\u\1\0\u\l\f\q\b\0\7\p\z\9\n\1\b\f\4\i\1\p\4\b\p\o\k\h\f\9\9\g\3\n\7\6\5\v\5\t\u\6\m\3\c\7\o\7\z\q\z\4\q\r\6\s\d\p\3\m\q\y\3\o\3\f\r\v\2\k\v\v\t\v\0\9\x\j\p\w\p\b\4\o\4\f\y\b\h\k\a\o\s\d\f\r\1\p\p\a\i\x\6\k\f\0\e\b\p\d\t\s\0\2\4\3\i\y\z\f\l\z\4\q\j\6\n\e\h\3\3\h\m\v\l\3\v\g\q\x\7\0\u\n\u\l\4\n\3\p\j\8\1\6\r\z\b\v\d\y\g\7\r\x\w\l\z\2\q\9\8\1\u\0\5\1\5\1\a\a\v\4\y\t\q\i\o\2\f\b\n\t\0\7\l\t\q\p\q\s\z\p\i\q\7\h\x\n\c\7\3\0\b\s\s\j\j\4\y\f\j\0\t\i\d\i\t\f\m\k\b\n\9\b\j\i\0\a\r\n\0\5\0\j\e\0\k\e\w\8\r\1\p\4\d\1\9\r\2\q\7\9\k\w\n\u\c\8\s\o\m\s\v\l\q\n\0\5\1\q\c\5\y\9\k\3\9\o\e\z\n\8\h\p\5\6\c\r\s\8\1\q\t\q\r\u\g\w\f\o\s\k\g\n\g\3\8\f\b\6\e\w\p\4\y\e\u\9\3\o\6\j\w\4\d\m\0\t\8\k\m\z\t\0\4\n\j\w\c\y\6\h\y\e\e\o\8\m\e\g\e\1\y\o\c\5\m\6\z\w\e\c\r\v\c\4\s\t\g\7\1\3\1\p\5\o\w\9\5\l\3\g\s\z\z\t\5\p\c\3\4\3\1\3\w\s\c\x\1\c\p\9\3\1\7\w\g\z\5\y\5\a\h\q\r\a\9\5\5\5\j\7\x\g\q\z\a\u\v\z\4\s\s\p\5\7\o\9\m\d ]] 00:08:56.768 09:10:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:56.768 09:10:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:56.768 [2024-12-13 09:10:50.397809] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:56.768 [2024-12-13 09:10:50.398200] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64232 ] 00:08:56.768 [2024-12-13 09:10:50.558252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.768 [2024-12-13 09:10:50.641619] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.027 [2024-12-13 09:10:50.794162] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:57.027  [2024-12-13T09:10:51.858Z] Copying: 512/512 [B] (average 125 kBps) 00:08:57.968 00:08:57.968 09:10:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0grdpr7842v8qx02rmt7ydhzagjryegew6swvcbeuvtcppmpcvewizgb3hgk00lu10ulfqb07pz9n1bf4i1p4bpokhf99g3n765v5tu6m3c7o7zqz4qr6sdp3mqy3o3frv2kvvtv09xjpwpb4o4fybhkaosdfr1ppaix6kf0ebpdts0243iyzflz4qj6neh33hmvl3vgqx70unul4n3pj816rzbvdyg7rxwlz2q981u05151aav4ytqio2fbnt07ltqpqszpiq7hxnc730bssjj4yfj0tiditfmkbn9bji0arn050je0kew8r1p4d19r2q79kwnuc8somsvlqn051qc5y9k39oezn8hp56crs81qtqrugwfoskgng38fb6ewp4yeu93o6jw4dm0t8kmzt04njwcy6hyeeo8mege1yoc5m6zwecrvc4stg7131p5ow95l3gszzt5pc34313wscx1cp9317wgz5y5ahqra9555j7xgqzauvz4ssp57o9md == \0\g\r\d\p\r\7\8\4\2\v\8\q\x\0\2\r\m\t\7\y\d\h\z\a\g\j\r\y\e\g\e\w\6\s\w\v\c\b\e\u\v\t\c\p\p\m\p\c\v\e\w\i\z\g\b\3\h\g\k\0\0\l\u\1\0\u\l\f\q\b\0\7\p\z\9\n\1\b\f\4\i\1\p\4\b\p\o\k\h\f\9\9\g\3\n\7\6\5\v\5\t\u\6\m\3\c\7\o\7\z\q\z\4\q\r\6\s\d\p\3\m\q\y\3\o\3\f\r\v\2\k\v\v\t\v\0\9\x\j\p\w\p\b\4\o\4\f\y\b\h\k\a\o\s\d\f\r\1\p\p\a\i\x\6\k\f\0\e\b\p\d\t\s\0\2\4\3\i\y\z\f\l\z\4\q\j\6\n\e\h\3\3\h\m\v\l\3\v\g\q\x\7\0\u\n\u\l\4\n\3\p\j\8\1\6\r\z\b\v\d\y\g\7\r\x\w\l\z\2\q\9\8\1\u\0\5\1\5\1\a\a\v\4\y\t\q\i\o\2\f\b\n\t\0\7\l\t\q\p\q\s\z\p\i\q\7\h\x\n\c\7\3\0\b\s\s\j\j\4\y\f\j\0\t\i\d\i\t\f\m\k\b\n\9\b\j\i\0\a\r\n\0\5\0\j\e\0\k\e\w\8\r\1\p\4\d\1\9\r\2\q\7\9\k\w\n\u\c\8\s\o\m\s\v\l\q\n\0\5\1\q\c\5\y\9\k\3\9\o\e\z\n\8\h\p\5\6\c\r\s\8\1\q\t\q\r\u\g\w\f\o\s\k\g\n\g\3\8\f\b\6\e\w\p\4\y\e\u\9\3\o\6\j\w\4\d\m\0\t\8\k\m\z\t\0\4\n\j\w\c\y\6\h\y\e\e\o\8\m\e\g\e\1\y\o\c\5\m\6\z\w\e\c\r\v\c\4\s\t\g\7\1\3\1\p\5\o\w\9\5\l\3\g\s\z\z\t\5\p\c\3\4\3\1\3\w\s\c\x\1\c\p\9\3\1\7\w\g\z\5\y\5\a\h\q\r\a\9\5\5\5\j\7\x\g\q\z\a\u\v\z\4\s\s\p\5\7\o\9\m\d ]] 00:08:57.968 09:10:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:57.968 09:10:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:58.227 [2024-12-13 09:10:51.878695] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:58.227 [2024-12-13 09:10:51.878878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64256 ] 00:08:58.227 [2024-12-13 09:10:52.055008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.485 [2024-12-13 09:10:52.147484] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.485 [2024-12-13 09:10:52.309482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:58.744  [2024-12-13T09:10:53.572Z] Copying: 512/512 [B] (average 166 kBps) 00:08:59.682 00:08:59.682 09:10:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0grdpr7842v8qx02rmt7ydhzagjryegew6swvcbeuvtcppmpcvewizgb3hgk00lu10ulfqb07pz9n1bf4i1p4bpokhf99g3n765v5tu6m3c7o7zqz4qr6sdp3mqy3o3frv2kvvtv09xjpwpb4o4fybhkaosdfr1ppaix6kf0ebpdts0243iyzflz4qj6neh33hmvl3vgqx70unul4n3pj816rzbvdyg7rxwlz2q981u05151aav4ytqio2fbnt07ltqpqszpiq7hxnc730bssjj4yfj0tiditfmkbn9bji0arn050je0kew8r1p4d19r2q79kwnuc8somsvlqn051qc5y9k39oezn8hp56crs81qtqrugwfoskgng38fb6ewp4yeu93o6jw4dm0t8kmzt04njwcy6hyeeo8mege1yoc5m6zwecrvc4stg7131p5ow95l3gszzt5pc34313wscx1cp9317wgz5y5ahqra9555j7xgqzauvz4ssp57o9md == \0\g\r\d\p\r\7\8\4\2\v\8\q\x\0\2\r\m\t\7\y\d\h\z\a\g\j\r\y\e\g\e\w\6\s\w\v\c\b\e\u\v\t\c\p\p\m\p\c\v\e\w\i\z\g\b\3\h\g\k\0\0\l\u\1\0\u\l\f\q\b\0\7\p\z\9\n\1\b\f\4\i\1\p\4\b\p\o\k\h\f\9\9\g\3\n\7\6\5\v\5\t\u\6\m\3\c\7\o\7\z\q\z\4\q\r\6\s\d\p\3\m\q\y\3\o\3\f\r\v\2\k\v\v\t\v\0\9\x\j\p\w\p\b\4\o\4\f\y\b\h\k\a\o\s\d\f\r\1\p\p\a\i\x\6\k\f\0\e\b\p\d\t\s\0\2\4\3\i\y\z\f\l\z\4\q\j\6\n\e\h\3\3\h\m\v\l\3\v\g\q\x\7\0\u\n\u\l\4\n\3\p\j\8\1\6\r\z\b\v\d\y\g\7\r\x\w\l\z\2\q\9\8\1\u\0\5\1\5\1\a\a\v\4\y\t\q\i\o\2\f\b\n\t\0\7\l\t\q\p\q\s\z\p\i\q\7\h\x\n\c\7\3\0\b\s\s\j\j\4\y\f\j\0\t\i\d\i\t\f\m\k\b\n\9\b\j\i\0\a\r\n\0\5\0\j\e\0\k\e\w\8\r\1\p\4\d\1\9\r\2\q\7\9\k\w\n\u\c\8\s\o\m\s\v\l\q\n\0\5\1\q\c\5\y\9\k\3\9\o\e\z\n\8\h\p\5\6\c\r\s\8\1\q\t\q\r\u\g\w\f\o\s\k\g\n\g\3\8\f\b\6\e\w\p\4\y\e\u\9\3\o\6\j\w\4\d\m\0\t\8\k\m\z\t\0\4\n\j\w\c\y\6\h\y\e\e\o\8\m\e\g\e\1\y\o\c\5\m\6\z\w\e\c\r\v\c\4\s\t\g\7\1\3\1\p\5\o\w\9\5\l\3\g\s\z\z\t\5\p\c\3\4\3\1\3\w\s\c\x\1\c\p\9\3\1\7\w\g\z\5\y\5\a\h\q\r\a\9\5\5\5\j\7\x\g\q\z\a\u\v\z\4\s\s\p\5\7\o\9\m\d ]] 00:08:59.682 00:08:59.682 real 0m11.952s 00:08:59.682 user 0m9.444s 00:08:59.682 sys 0m1.517s 00:08:59.682 09:10:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.682 ************************************ 00:08:59.682 END TEST dd_flags_misc_forced_aio 00:08:59.682 ************************************ 00:08:59.682 09:10:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:59.682 09:10:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:08:59.682 09:10:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:59.682 09:10:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:59.682 ************************************ 00:08:59.682 END TEST spdk_dd_posix 00:08:59.682 ************************************ 00:08:59.682 00:08:59.682 real 0m51.047s 00:08:59.682 user 0m38.753s 00:08:59.682 sys 0m14.393s 00:08:59.682 09:10:53 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.682 09:10:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:59.682 09:10:53 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:59.682 09:10:53 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:59.682 09:10:53 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.682 09:10:53 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:59.682 ************************************ 00:08:59.682 START TEST spdk_dd_malloc 00:08:59.682 ************************************ 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:59.682 * Looking for test storage... 00:08:59.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.682 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.683 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.683 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:08:59.683 09:10:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:59.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.942 --rc genhtml_branch_coverage=1 00:08:59.942 --rc genhtml_function_coverage=1 00:08:59.942 --rc genhtml_legend=1 00:08:59.942 --rc geninfo_all_blocks=1 00:08:59.942 --rc geninfo_unexecuted_blocks=1 00:08:59.942 00:08:59.942 ' 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:59.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.942 --rc genhtml_branch_coverage=1 00:08:59.942 --rc genhtml_function_coverage=1 00:08:59.942 --rc genhtml_legend=1 00:08:59.942 --rc geninfo_all_blocks=1 00:08:59.942 --rc geninfo_unexecuted_blocks=1 00:08:59.942 00:08:59.942 ' 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:59.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.942 --rc genhtml_branch_coverage=1 00:08:59.942 --rc genhtml_function_coverage=1 00:08:59.942 --rc genhtml_legend=1 00:08:59.942 --rc geninfo_all_blocks=1 00:08:59.942 --rc geninfo_unexecuted_blocks=1 00:08:59.942 00:08:59.942 ' 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:59.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.942 --rc genhtml_branch_coverage=1 00:08:59.942 --rc genhtml_function_coverage=1 00:08:59.942 --rc genhtml_legend=1 00:08:59.942 --rc geninfo_all_blocks=1 00:08:59.942 --rc geninfo_unexecuted_blocks=1 00:08:59.942 00:08:59.942 ' 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.942 09:10:53 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:59.942 ************************************ 00:08:59.942 START TEST dd_malloc_copy 00:08:59.942 ************************************ 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:59.942 09:10:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:59.942 { 00:08:59.942 "subsystems": [ 00:08:59.942 { 00:08:59.942 "subsystem": "bdev", 00:08:59.942 "config": [ 00:08:59.942 { 00:08:59.942 "params": { 00:08:59.942 "block_size": 512, 00:08:59.942 "num_blocks": 1048576, 00:08:59.942 "name": "malloc0" 00:08:59.942 }, 00:08:59.942 "method": "bdev_malloc_create" 00:08:59.942 }, 00:08:59.942 { 00:08:59.942 "params": { 00:08:59.942 "block_size": 512, 00:08:59.942 "num_blocks": 1048576, 00:08:59.942 "name": "malloc1" 00:08:59.942 }, 00:08:59.942 "method": "bdev_malloc_create" 00:08:59.942 }, 00:08:59.942 { 00:08:59.942 "method": "bdev_wait_for_examine" 00:08:59.942 } 00:08:59.942 ] 00:08:59.942 } 00:08:59.942 ] 00:08:59.942 } 00:08:59.942 [2024-12-13 09:10:53.703161] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:59.942 [2024-12-13 09:10:53.703597] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64344 ] 00:09:00.202 [2024-12-13 09:10:53.880165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.202 [2024-12-13 09:10:53.968624] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.461 [2024-12-13 09:10:54.126387] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:02.367  [2024-12-13T09:10:57.194Z] Copying: 183/512 [MB] (183 MBps) [2024-12-13T09:10:58.131Z] Copying: 367/512 [MB] (184 MBps) [2024-12-13T09:11:00.664Z] Copying: 512/512 [MB] (average 183 MBps) 00:09:06.774 00:09:06.774 09:11:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:09:06.774 09:11:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:09:06.774 09:11:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:06.774 09:11:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:07.034 { 00:09:07.034 "subsystems": [ 00:09:07.034 { 00:09:07.034 "subsystem": "bdev", 00:09:07.034 "config": [ 00:09:07.034 { 00:09:07.034 "params": { 00:09:07.034 "block_size": 512, 00:09:07.034 "num_blocks": 1048576, 00:09:07.034 "name": "malloc0" 00:09:07.034 }, 00:09:07.034 "method": "bdev_malloc_create" 00:09:07.034 }, 00:09:07.034 { 00:09:07.034 "params": { 00:09:07.034 "block_size": 512, 00:09:07.034 "num_blocks": 1048576, 00:09:07.034 "name": "malloc1" 00:09:07.034 }, 00:09:07.034 "method": 
"bdev_malloc_create" 00:09:07.034 }, 00:09:07.034 { 00:09:07.034 "method": "bdev_wait_for_examine" 00:09:07.034 } 00:09:07.034 ] 00:09:07.034 } 00:09:07.034 ] 00:09:07.034 } 00:09:07.034 [2024-12-13 09:11:00.772774] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:07.034 [2024-12-13 09:11:00.773175] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64427 ] 00:09:07.293 [2024-12-13 09:11:00.955965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.293 [2024-12-13 09:11:01.051548] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.552 [2024-12-13 09:11:01.206239] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:09.458  [2024-12-13T09:11:04.299Z] Copying: 191/512 [MB] (191 MBps) [2024-12-13T09:11:04.867Z] Copying: 375/512 [MB] (183 MBps) [2024-12-13T09:11:08.156Z] Copying: 512/512 [MB] (average 189 MBps) 00:09:14.266 00:09:14.266 00:09:14.266 real 0m14.075s 00:09:14.266 user 0m13.072s 00:09:14.266 sys 0m0.820s 00:09:14.266 09:11:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.266 09:11:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:14.266 ************************************ 00:09:14.266 END TEST dd_malloc_copy 00:09:14.266 ************************************ 00:09:14.266 ************************************ 00:09:14.266 END TEST spdk_dd_malloc 00:09:14.266 ************************************ 00:09:14.266 00:09:14.266 real 0m14.325s 00:09:14.266 user 0m13.214s 00:09:14.266 sys 0m0.922s 00:09:14.266 09:11:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.266 09:11:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:09:14.266 09:11:07 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:09:14.266 09:11:07 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:14.266 09:11:07 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.266 09:11:07 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:14.266 ************************************ 00:09:14.266 START TEST spdk_dd_bdev_to_bdev 00:09:14.266 ************************************ 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:09:14.267 * Looking for test storage... 
00:09:14.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lcov --version 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:14.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.267 --rc genhtml_branch_coverage=1 00:09:14.267 --rc genhtml_function_coverage=1 00:09:14.267 --rc genhtml_legend=1 00:09:14.267 --rc geninfo_all_blocks=1 00:09:14.267 --rc geninfo_unexecuted_blocks=1 00:09:14.267 00:09:14.267 ' 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:14.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.267 --rc genhtml_branch_coverage=1 00:09:14.267 --rc genhtml_function_coverage=1 00:09:14.267 --rc genhtml_legend=1 00:09:14.267 --rc geninfo_all_blocks=1 00:09:14.267 --rc geninfo_unexecuted_blocks=1 00:09:14.267 00:09:14.267 ' 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:14.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.267 --rc genhtml_branch_coverage=1 00:09:14.267 --rc genhtml_function_coverage=1 00:09:14.267 --rc genhtml_legend=1 00:09:14.267 --rc geninfo_all_blocks=1 00:09:14.267 --rc geninfo_unexecuted_blocks=1 00:09:14.267 00:09:14.267 ' 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:14.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.267 --rc genhtml_branch_coverage=1 00:09:14.267 --rc genhtml_function_coverage=1 00:09:14.267 --rc genhtml_legend=1 00:09:14.267 --rc geninfo_all_blocks=1 00:09:14.267 --rc geninfo_unexecuted_blocks=1 00:09:14.267 00:09:14.267 ' 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.267 09:11:07 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:14.267 ************************************ 00:09:14.267 START TEST dd_inflate_file 00:09:14.267 ************************************ 00:09:14.267 09:11:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:14.267 [2024-12-13 09:11:08.069727] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:14.267 [2024-12-13 09:11:08.070175] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64585 ] 00:09:14.527 [2024-12-13 09:11:08.249873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.527 [2024-12-13 09:11:08.333225] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.785 [2024-12-13 09:11:08.494939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:14.785  [2024-12-13T09:11:09.611Z] Copying: 64/64 [MB] (average 1684 MBps) 00:09:15.721 00:09:15.721 00:09:15.721 real 0m1.523s 00:09:15.721 user 0m1.226s 00:09:15.721 sys 0m0.877s 00:09:15.721 09:11:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.721 09:11:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:09:15.721 ************************************ 00:09:15.721 END TEST dd_inflate_file 00:09:15.721 ************************************ 00:09:15.721 09:11:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:09:15.721 09:11:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:09:15.721 09:11:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:09:15.721 09:11:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:09:15.721 09:11:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:15.721 09:11:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:15.721 09:11:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.721 09:11:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:15.721 09:11:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:15.721 ************************************ 00:09:15.721 START TEST dd_copy_to_out_bdev 00:09:15.721 ************************************ 00:09:15.721 09:11:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:09:15.721 { 00:09:15.721 "subsystems": [ 00:09:15.721 { 00:09:15.721 "subsystem": "bdev", 00:09:15.721 "config": [ 00:09:15.721 { 00:09:15.721 "params": { 00:09:15.721 "trtype": "pcie", 00:09:15.721 "traddr": "0000:00:10.0", 00:09:15.721 "name": "Nvme0" 00:09:15.721 }, 00:09:15.721 "method": "bdev_nvme_attach_controller" 00:09:15.721 }, 00:09:15.721 { 00:09:15.721 "params": { 00:09:15.721 "trtype": "pcie", 00:09:15.721 "traddr": "0000:00:11.0", 00:09:15.721 "name": "Nvme1" 00:09:15.721 }, 00:09:15.721 "method": "bdev_nvme_attach_controller" 00:09:15.721 }, 00:09:15.721 { 00:09:15.721 "method": "bdev_wait_for_examine" 00:09:15.721 } 00:09:15.721 ] 00:09:15.721 } 00:09:15.721 ] 00:09:15.721 } 00:09:15.980 [2024-12-13 09:11:09.649933] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:15.980 [2024-12-13 09:11:09.650115] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64629 ] 00:09:15.980 [2024-12-13 09:11:09.829439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.239 [2024-12-13 09:11:09.914328] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.239 [2024-12-13 09:11:10.072073] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:17.616  [2024-12-13T09:11:11.765Z] Copying: 45/64 [MB] (45 MBps) [2024-12-13T09:11:12.703Z] Copying: 64/64 [MB] (average 45 MBps) 00:09:18.813 00:09:18.813 00:09:18.813 real 0m3.068s 00:09:18.813 user 0m2.760s 00:09:18.813 sys 0m2.315s 00:09:18.813 09:11:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.813 09:11:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:18.813 ************************************ 00:09:18.813 END TEST dd_copy_to_out_bdev 00:09:18.813 ************************************ 00:09:18.813 09:11:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:09:18.813 09:11:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:09:18.813 09:11:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:18.813 09:11:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.813 09:11:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:18.813 ************************************ 00:09:18.813 START TEST dd_offset_magic 00:09:18.813 ************************************ 00:09:18.813 09:11:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:09:18.813 09:11:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:09:18.813 09:11:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:09:18.813 09:11:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:09:18.813 09:11:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:18.813 09:11:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:09:18.813 09:11:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:18.813 09:11:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:18.813 09:11:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:19.073 { 00:09:19.073 "subsystems": [ 00:09:19.073 { 00:09:19.073 "subsystem": "bdev", 00:09:19.073 "config": [ 00:09:19.073 { 00:09:19.073 "params": { 00:09:19.073 "trtype": "pcie", 00:09:19.073 "traddr": "0000:00:10.0", 00:09:19.073 "name": "Nvme0" 00:09:19.073 }, 00:09:19.073 "method": "bdev_nvme_attach_controller" 00:09:19.073 }, 00:09:19.073 { 00:09:19.073 "params": { 00:09:19.073 "trtype": "pcie", 00:09:19.073 "traddr": "0000:00:11.0", 00:09:19.073 "name": "Nvme1" 
00:09:19.073 }, 00:09:19.073 "method": "bdev_nvme_attach_controller" 00:09:19.073 }, 00:09:19.073 { 00:09:19.073 "method": "bdev_wait_for_examine" 00:09:19.073 } 00:09:19.073 ] 00:09:19.073 } 00:09:19.073 ] 00:09:19.073 } 00:09:19.073 [2024-12-13 09:11:12.762429] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:19.073 [2024-12-13 09:11:12.762600] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64682 ] 00:09:19.073 [2024-12-13 09:11:12.928998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.332 [2024-12-13 09:11:13.013027] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.332 [2024-12-13 09:11:13.164367] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:19.591  [2024-12-13T09:11:14.419Z] Copying: 65/65 [MB] (average 942 MBps) 00:09:20.529 00:09:20.529 09:11:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:09:20.529 09:11:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:20.529 09:11:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:20.529 09:11:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:20.529 { 00:09:20.529 "subsystems": [ 00:09:20.529 { 00:09:20.529 "subsystem": "bdev", 00:09:20.529 "config": [ 00:09:20.529 { 00:09:20.529 "params": { 00:09:20.529 "trtype": "pcie", 00:09:20.529 "traddr": "0000:00:10.0", 00:09:20.529 "name": "Nvme0" 00:09:20.529 }, 00:09:20.529 "method": "bdev_nvme_attach_controller" 00:09:20.529 }, 00:09:20.529 { 00:09:20.529 "params": { 00:09:20.529 "trtype": "pcie", 00:09:20.529 "traddr": "0000:00:11.0", 00:09:20.529 "name": "Nvme1" 00:09:20.529 }, 00:09:20.529 "method": "bdev_nvme_attach_controller" 00:09:20.529 }, 00:09:20.529 { 00:09:20.529 "method": "bdev_wait_for_examine" 00:09:20.529 } 00:09:20.529 ] 00:09:20.529 } 00:09:20.529 ] 00:09:20.529 } 00:09:20.788 [2024-12-13 09:11:14.457420] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:20.788 [2024-12-13 09:11:14.457614] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64714 ] 00:09:20.788 [2024-12-13 09:11:14.638525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.046 [2024-12-13 09:11:14.737010] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.046 [2024-12-13 09:11:14.909143] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:21.304  [2024-12-13T09:11:16.149Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:22.259 00:09:22.259 09:11:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:22.259 09:11:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:22.259 09:11:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:22.259 09:11:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:09:22.259 09:11:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:22.259 09:11:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:22.259 09:11:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:22.259 { 00:09:22.259 "subsystems": [ 00:09:22.259 { 00:09:22.259 "subsystem": "bdev", 00:09:22.259 "config": [ 00:09:22.259 { 00:09:22.259 "params": { 00:09:22.259 "trtype": "pcie", 00:09:22.259 "traddr": "0000:00:10.0", 00:09:22.259 "name": "Nvme0" 00:09:22.259 }, 00:09:22.259 "method": "bdev_nvme_attach_controller" 00:09:22.259 }, 00:09:22.259 { 00:09:22.259 "params": { 00:09:22.259 "trtype": "pcie", 00:09:22.259 "traddr": "0000:00:11.0", 00:09:22.259 "name": "Nvme1" 00:09:22.259 }, 00:09:22.259 "method": "bdev_nvme_attach_controller" 00:09:22.259 }, 00:09:22.259 { 00:09:22.259 "method": "bdev_wait_for_examine" 00:09:22.259 } 00:09:22.259 ] 00:09:22.259 } 00:09:22.259 ] 00:09:22.259 } 00:09:22.518 [2024-12-13 09:11:16.179382] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:22.518 [2024-12-13 09:11:16.179543] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64742 ] 00:09:22.518 [2024-12-13 09:11:16.361278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.777 [2024-12-13 09:11:16.454542] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.777 [2024-12-13 09:11:16.612634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:23.036  [2024-12-13T09:11:17.864Z] Copying: 65/65 [MB] (average 1160 MBps) 00:09:23.974 00:09:23.974 09:11:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:09:23.974 09:11:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:23.974 09:11:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:23.974 09:11:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:23.974 { 00:09:23.974 "subsystems": [ 00:09:23.974 { 00:09:23.974 "subsystem": "bdev", 00:09:23.974 "config": [ 00:09:23.974 { 00:09:23.974 "params": { 00:09:23.974 "trtype": "pcie", 00:09:23.974 "traddr": "0000:00:10.0", 00:09:23.974 "name": "Nvme0" 00:09:23.974 }, 00:09:23.974 "method": "bdev_nvme_attach_controller" 00:09:23.974 }, 00:09:23.974 { 00:09:23.974 "params": { 00:09:23.974 "trtype": "pcie", 00:09:23.974 "traddr": "0000:00:11.0", 00:09:23.974 "name": "Nvme1" 00:09:23.974 }, 00:09:23.974 "method": "bdev_nvme_attach_controller" 00:09:23.974 }, 00:09:23.974 { 00:09:23.974 "method": "bdev_wait_for_examine" 00:09:23.974 } 00:09:23.974 ] 00:09:23.974 } 00:09:23.974 ] 00:09:23.974 } 00:09:23.974 [2024-12-13 09:11:17.760092] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:23.974 [2024-12-13 09:11:17.760400] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64769 ] 00:09:24.233 [2024-12-13 09:11:17.927336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.233 [2024-12-13 09:11:18.017096] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.492 [2024-12-13 09:11:18.192718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:24.751  [2024-12-13T09:11:19.578Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:25.689 00:09:25.689 09:11:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:25.689 09:11:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:25.689 00:09:25.689 real 0m6.716s 00:09:25.689 user 0m5.693s 00:09:25.689 sys 0m2.209s 00:09:25.689 09:11:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.689 ************************************ 00:09:25.689 END TEST dd_offset_magic 00:09:25.689 ************************************ 00:09:25.689 09:11:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:25.689 09:11:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:09:25.689 09:11:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:09:25.689 09:11:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:25.689 09:11:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:09:25.689 09:11:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:09:25.689 09:11:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:09:25.689 09:11:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:09:25.689 09:11:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:09:25.689 09:11:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:09:25.689 09:11:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:25.689 09:11:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:25.689 { 00:09:25.689 "subsystems": [ 00:09:25.689 { 00:09:25.689 "subsystem": "bdev", 00:09:25.689 "config": [ 00:09:25.689 { 00:09:25.689 "params": { 00:09:25.689 "trtype": "pcie", 00:09:25.689 "traddr": "0000:00:10.0", 00:09:25.689 "name": "Nvme0" 00:09:25.689 }, 00:09:25.689 "method": "bdev_nvme_attach_controller" 00:09:25.689 }, 00:09:25.689 { 00:09:25.689 "params": { 00:09:25.689 "trtype": "pcie", 00:09:25.689 "traddr": "0000:00:11.0", 00:09:25.689 "name": "Nvme1" 00:09:25.689 }, 00:09:25.689 "method": "bdev_nvme_attach_controller" 00:09:25.689 }, 00:09:25.689 { 00:09:25.689 "method": "bdev_wait_for_examine" 00:09:25.689 } 00:09:25.689 ] 00:09:25.689 } 00:09:25.689 ] 00:09:25.689 } 00:09:25.689 [2024-12-13 09:11:19.542221] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:25.689 [2024-12-13 09:11:19.542460] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64819 ] 00:09:25.948 [2024-12-13 09:11:19.724058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.948 [2024-12-13 09:11:19.821539] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.207 [2024-12-13 09:11:19.989920] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:26.466  [2024-12-13T09:11:21.293Z] Copying: 5120/5120 [kB] (average 1666 MBps) 00:09:27.403 00:09:27.403 09:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:09:27.403 09:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:09:27.403 09:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:09:27.403 09:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:09:27.403 09:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:09:27.403 09:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:09:27.403 09:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:09:27.403 09:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:09:27.403 09:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:27.403 09:11:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:27.403 { 00:09:27.403 "subsystems": [ 00:09:27.403 { 00:09:27.403 "subsystem": "bdev", 00:09:27.403 "config": [ 00:09:27.403 { 00:09:27.403 "params": { 00:09:27.403 "trtype": "pcie", 00:09:27.403 "traddr": "0000:00:10.0", 00:09:27.403 "name": "Nvme0" 00:09:27.403 }, 00:09:27.403 "method": "bdev_nvme_attach_controller" 00:09:27.403 }, 00:09:27.403 { 00:09:27.403 "params": { 00:09:27.403 "trtype": "pcie", 00:09:27.403 "traddr": "0000:00:11.0", 00:09:27.403 "name": "Nvme1" 00:09:27.403 }, 00:09:27.403 "method": "bdev_nvme_attach_controller" 00:09:27.403 }, 00:09:27.403 { 00:09:27.403 "method": "bdev_wait_for_examine" 00:09:27.403 } 00:09:27.403 ] 00:09:27.403 } 00:09:27.403 ] 00:09:27.403 } 00:09:27.403 [2024-12-13 09:11:21.144301] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:27.403 [2024-12-13 09:11:21.144472] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64841 ] 00:09:27.662 [2024-12-13 09:11:21.325654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.662 [2024-12-13 09:11:21.428897] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.921 [2024-12-13 09:11:21.600134] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:28.180  [2024-12-13T09:11:23.008Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:09:29.118 00:09:29.118 09:11:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:09:29.118 ************************************ 00:09:29.118 END TEST spdk_dd_bdev_to_bdev 00:09:29.118 ************************************ 00:09:29.118 00:09:29.118 real 0m15.077s 00:09:29.118 user 0m12.704s 00:09:29.118 sys 0m7.277s 00:09:29.118 09:11:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.118 09:11:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:29.118 09:11:22 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:09:29.118 09:11:22 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:29.118 09:11:22 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.118 09:11:22 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.118 09:11:22 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:29.118 ************************************ 00:09:29.118 START TEST spdk_dd_uring 00:09:29.118 ************************************ 00:09:29.118 09:11:22 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:29.118 * Looking for test storage... 
00:09:29.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:29.118 09:11:22 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:29.118 09:11:22 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lcov --version 00:09:29.118 09:11:22 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:29.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.378 --rc genhtml_branch_coverage=1 00:09:29.378 --rc genhtml_function_coverage=1 00:09:29.378 --rc genhtml_legend=1 00:09:29.378 --rc geninfo_all_blocks=1 00:09:29.378 --rc geninfo_unexecuted_blocks=1 00:09:29.378 00:09:29.378 ' 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:29.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.378 --rc genhtml_branch_coverage=1 00:09:29.378 --rc genhtml_function_coverage=1 00:09:29.378 --rc genhtml_legend=1 00:09:29.378 --rc geninfo_all_blocks=1 00:09:29.378 --rc geninfo_unexecuted_blocks=1 00:09:29.378 00:09:29.378 ' 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:29.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.378 --rc genhtml_branch_coverage=1 00:09:29.378 --rc genhtml_function_coverage=1 00:09:29.378 --rc genhtml_legend=1 00:09:29.378 --rc geninfo_all_blocks=1 00:09:29.378 --rc geninfo_unexecuted_blocks=1 00:09:29.378 00:09:29.378 ' 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:29.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.378 --rc genhtml_branch_coverage=1 00:09:29.378 --rc genhtml_function_coverage=1 00:09:29.378 --rc genhtml_legend=1 00:09:29.378 --rc geninfo_all_blocks=1 00:09:29.378 --rc geninfo_unexecuted_blocks=1 00:09:29.378 00:09:29.378 ' 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.378 09:11:23 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:09:29.379 ************************************ 00:09:29.379 START TEST dd_uring_copy 00:09:29.379 ************************************ 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:29.379 
09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=lljftf5l015g8ttyejxvx2v1b2fu1uutcddnpbuwpqjgaxe3pdtg133250jwrsn56f8l2he0aga4m6uxo9cfgfeyksk9o1ulgpu4lc6q2ur3pln6lzqp1witkjrvqb8md61btnotim1dae0ykfp9fug6ei7p55lbtm55yitj5ojl1wuxvt63iixgbkm0eudqdx2kkyy19alz4muzjgxwvho0lv802s2pw3naa4kv8jaa6m18iyb7ugf2z4rxah05zmu3695tlnetypzkzdsyrf7eq923o4trb264rdn3a0ueoo9bztrwxmf2lwnhpgb36sl8ojoqp8wbvan7nxh3o24conz7gemnov7uft8csr90zja9jca43r83jbkujd5h5e45nxjuoq29xkigj24ttv3t250vhhz7w4zd6bes8wemzmma2jpbpoz9j566qdb3u0pu00furgzkhgfbochoi94v2qu5k0qvb0vwvzlfzfmdbbglyf6fgfszf7u93zpgy31bxw114dp7apus0dfbrj85fvp79uaib74m392z4pebrr1kp10tzbvic0ivetjam2v119bk0x4i7uykmhlb6iwcmnh8dt4gu2lo4fmrruhn5w53jqcteo5i764wkuolxzpd9d6qbxvalxjmatm2nzwf067is9eorsx4ziidcuc7tsktvbu24myb8yuqyp1oxcp0mryi7mfaxofsmpxayvvfqc9ifishxl5dkzwvqxhim3epn4qkffvjknk2aknecsym9mp5oums5vtiavgvev7op5z2g3pnon8kl793jlihr5z4o5trvnjxdzb4yf8h3zush8995cw9u9r8qmtulvgia20l0rjp46b8uheynmvngf1udsxrv5nzcoas5wzeqms0xcue3w1jf2qgvlvq74sn7f9snv9y7vl6tp74xboqne1c6v8csd8jt9aie9ft1ru2v17jaxh71dhmqarmuly7h7n7fqnbhfxtiqjtzfo6kw0jsge7bgd95vdyq8me 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
lljftf5l015g8ttyejxvx2v1b2fu1uutcddnpbuwpqjgaxe3pdtg133250jwrsn56f8l2he0aga4m6uxo9cfgfeyksk9o1ulgpu4lc6q2ur3pln6lzqp1witkjrvqb8md61btnotim1dae0ykfp9fug6ei7p55lbtm55yitj5ojl1wuxvt63iixgbkm0eudqdx2kkyy19alz4muzjgxwvho0lv802s2pw3naa4kv8jaa6m18iyb7ugf2z4rxah05zmu3695tlnetypzkzdsyrf7eq923o4trb264rdn3a0ueoo9bztrwxmf2lwnhpgb36sl8ojoqp8wbvan7nxh3o24conz7gemnov7uft8csr90zja9jca43r83jbkujd5h5e45nxjuoq29xkigj24ttv3t250vhhz7w4zd6bes8wemzmma2jpbpoz9j566qdb3u0pu00furgzkhgfbochoi94v2qu5k0qvb0vwvzlfzfmdbbglyf6fgfszf7u93zpgy31bxw114dp7apus0dfbrj85fvp79uaib74m392z4pebrr1kp10tzbvic0ivetjam2v119bk0x4i7uykmhlb6iwcmnh8dt4gu2lo4fmrruhn5w53jqcteo5i764wkuolxzpd9d6qbxvalxjmatm2nzwf067is9eorsx4ziidcuc7tsktvbu24myb8yuqyp1oxcp0mryi7mfaxofsmpxayvvfqc9ifishxl5dkzwvqxhim3epn4qkffvjknk2aknecsym9mp5oums5vtiavgvev7op5z2g3pnon8kl793jlihr5z4o5trvnjxdzb4yf8h3zush8995cw9u9r8qmtulvgia20l0rjp46b8uheynmvngf1udsxrv5nzcoas5wzeqms0xcue3w1jf2qgvlvq74sn7f9snv9y7vl6tp74xboqne1c6v8csd8jt9aie9ft1ru2v17jaxh71dhmqarmuly7h7n7fqnbhfxtiqjtzfo6kw0jsge7bgd95vdyq8me 00:09:29.379 09:11:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:09:29.379 [2024-12-13 09:11:23.208443] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:29.379 [2024-12-13 09:11:23.208601] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64931 ] 00:09:29.638 [2024-12-13 09:11:23.373867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.638 [2024-12-13 09:11:23.468610] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.898 [2024-12-13 09:11:23.636363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:30.905  [2024-12-13T09:11:26.698Z] Copying: 511/511 [MB] (average 1216 MBps) 00:09:32.808 00:09:32.808 09:11:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:09:32.808 09:11:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:09:32.808 09:11:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:32.808 09:11:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:32.808 { 00:09:32.808 "subsystems": [ 00:09:32.808 { 00:09:32.808 "subsystem": "bdev", 00:09:32.808 "config": [ 00:09:32.808 { 00:09:32.808 "params": { 00:09:32.808 "block_size": 512, 00:09:32.808 "num_blocks": 1048576, 00:09:32.808 "name": "malloc0" 00:09:32.808 }, 00:09:32.808 "method": "bdev_malloc_create" 00:09:32.808 }, 00:09:32.808 { 00:09:32.808 "params": { 00:09:32.808 "filename": "/dev/zram1", 00:09:32.808 "name": "uring0" 00:09:32.808 }, 00:09:32.808 "method": "bdev_uring_create" 00:09:32.808 }, 00:09:32.808 { 00:09:32.808 "method": "bdev_wait_for_examine" 00:09:32.808 } 00:09:32.808 ] 00:09:32.808 } 00:09:32.808 ] 00:09:32.808 } 00:09:32.808 [2024-12-13 09:11:26.568897] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:32.808 [2024-12-13 09:11:26.569069] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64976 ] 00:09:33.066 [2024-12-13 09:11:26.739993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.066 [2024-12-13 09:11:26.831555] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.326 [2024-12-13 09:11:26.993510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:34.715  [2024-12-13T09:11:29.542Z] Copying: 214/512 [MB] (214 MBps) [2024-12-13T09:11:30.110Z] Copying: 428/512 [MB] (213 MBps) [2024-12-13T09:11:32.015Z] Copying: 512/512 [MB] (average 211 MBps) 00:09:38.125 00:09:38.125 09:11:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:09:38.125 09:11:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:09:38.125 09:11:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:38.126 09:11:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:38.126 { 00:09:38.126 "subsystems": [ 00:09:38.126 { 00:09:38.126 "subsystem": "bdev", 00:09:38.126 "config": [ 00:09:38.126 { 00:09:38.126 "params": { 00:09:38.126 "block_size": 512, 00:09:38.126 "num_blocks": 1048576, 00:09:38.126 "name": "malloc0" 00:09:38.126 }, 00:09:38.126 "method": "bdev_malloc_create" 00:09:38.126 }, 00:09:38.126 { 00:09:38.126 "params": { 00:09:38.126 "filename": "/dev/zram1", 00:09:38.126 "name": "uring0" 00:09:38.126 }, 00:09:38.126 "method": "bdev_uring_create" 00:09:38.126 }, 00:09:38.126 { 00:09:38.126 "method": "bdev_wait_for_examine" 00:09:38.126 } 00:09:38.126 ] 00:09:38.126 } 00:09:38.126 ] 00:09:38.126 } 00:09:38.126 [2024-12-13 09:11:31.925220] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:38.126 [2024-12-13 09:11:31.925448] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65043 ] 00:09:38.384 [2024-12-13 09:11:32.101895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.384 [2024-12-13 09:11:32.184042] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.643 [2024-12-13 09:11:32.341944] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:40.021  [2024-12-13T09:11:34.848Z] Copying: 129/512 [MB] (129 MBps) [2024-12-13T09:11:36.223Z] Copying: 263/512 [MB] (134 MBps) [2024-12-13T09:11:36.790Z] Copying: 400/512 [MB] (136 MBps) [2024-12-13T09:11:38.693Z] Copying: 512/512 [MB] (average 131 MBps) 00:09:44.803 00:09:44.803 09:11:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:09:44.804 09:11:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ lljftf5l015g8ttyejxvx2v1b2fu1uutcddnpbuwpqjgaxe3pdtg133250jwrsn56f8l2he0aga4m6uxo9cfgfeyksk9o1ulgpu4lc6q2ur3pln6lzqp1witkjrvqb8md61btnotim1dae0ykfp9fug6ei7p55lbtm55yitj5ojl1wuxvt63iixgbkm0eudqdx2kkyy19alz4muzjgxwvho0lv802s2pw3naa4kv8jaa6m18iyb7ugf2z4rxah05zmu3695tlnetypzkzdsyrf7eq923o4trb264rdn3a0ueoo9bztrwxmf2lwnhpgb36sl8ojoqp8wbvan7nxh3o24conz7gemnov7uft8csr90zja9jca43r83jbkujd5h5e45nxjuoq29xkigj24ttv3t250vhhz7w4zd6bes8wemzmma2jpbpoz9j566qdb3u0pu00furgzkhgfbochoi94v2qu5k0qvb0vwvzlfzfmdbbglyf6fgfszf7u93zpgy31bxw114dp7apus0dfbrj85fvp79uaib74m392z4pebrr1kp10tzbvic0ivetjam2v119bk0x4i7uykmhlb6iwcmnh8dt4gu2lo4fmrruhn5w53jqcteo5i764wkuolxzpd9d6qbxvalxjmatm2nzwf067is9eorsx4ziidcuc7tsktvbu24myb8yuqyp1oxcp0mryi7mfaxofsmpxayvvfqc9ifishxl5dkzwvqxhim3epn4qkffvjknk2aknecsym9mp5oums5vtiavgvev7op5z2g3pnon8kl793jlihr5z4o5trvnjxdzb4yf8h3zush8995cw9u9r8qmtulvgia20l0rjp46b8uheynmvngf1udsxrv5nzcoas5wzeqms0xcue3w1jf2qgvlvq74sn7f9snv9y7vl6tp74xboqne1c6v8csd8jt9aie9ft1ru2v17jaxh71dhmqarmuly7h7n7fqnbhfxtiqjtzfo6kw0jsge7bgd95vdyq8me == 
\l\l\j\f\t\f\5\l\0\1\5\g\8\t\t\y\e\j\x\v\x\2\v\1\b\2\f\u\1\u\u\t\c\d\d\n\p\b\u\w\p\q\j\g\a\x\e\3\p\d\t\g\1\3\3\2\5\0\j\w\r\s\n\5\6\f\8\l\2\h\e\0\a\g\a\4\m\6\u\x\o\9\c\f\g\f\e\y\k\s\k\9\o\1\u\l\g\p\u\4\l\c\6\q\2\u\r\3\p\l\n\6\l\z\q\p\1\w\i\t\k\j\r\v\q\b\8\m\d\6\1\b\t\n\o\t\i\m\1\d\a\e\0\y\k\f\p\9\f\u\g\6\e\i\7\p\5\5\l\b\t\m\5\5\y\i\t\j\5\o\j\l\1\w\u\x\v\t\6\3\i\i\x\g\b\k\m\0\e\u\d\q\d\x\2\k\k\y\y\1\9\a\l\z\4\m\u\z\j\g\x\w\v\h\o\0\l\v\8\0\2\s\2\p\w\3\n\a\a\4\k\v\8\j\a\a\6\m\1\8\i\y\b\7\u\g\f\2\z\4\r\x\a\h\0\5\z\m\u\3\6\9\5\t\l\n\e\t\y\p\z\k\z\d\s\y\r\f\7\e\q\9\2\3\o\4\t\r\b\2\6\4\r\d\n\3\a\0\u\e\o\o\9\b\z\t\r\w\x\m\f\2\l\w\n\h\p\g\b\3\6\s\l\8\o\j\o\q\p\8\w\b\v\a\n\7\n\x\h\3\o\2\4\c\o\n\z\7\g\e\m\n\o\v\7\u\f\t\8\c\s\r\9\0\z\j\a\9\j\c\a\4\3\r\8\3\j\b\k\u\j\d\5\h\5\e\4\5\n\x\j\u\o\q\2\9\x\k\i\g\j\2\4\t\t\v\3\t\2\5\0\v\h\h\z\7\w\4\z\d\6\b\e\s\8\w\e\m\z\m\m\a\2\j\p\b\p\o\z\9\j\5\6\6\q\d\b\3\u\0\p\u\0\0\f\u\r\g\z\k\h\g\f\b\o\c\h\o\i\9\4\v\2\q\u\5\k\0\q\v\b\0\v\w\v\z\l\f\z\f\m\d\b\b\g\l\y\f\6\f\g\f\s\z\f\7\u\9\3\z\p\g\y\3\1\b\x\w\1\1\4\d\p\7\a\p\u\s\0\d\f\b\r\j\8\5\f\v\p\7\9\u\a\i\b\7\4\m\3\9\2\z\4\p\e\b\r\r\1\k\p\1\0\t\z\b\v\i\c\0\i\v\e\t\j\a\m\2\v\1\1\9\b\k\0\x\4\i\7\u\y\k\m\h\l\b\6\i\w\c\m\n\h\8\d\t\4\g\u\2\l\o\4\f\m\r\r\u\h\n\5\w\5\3\j\q\c\t\e\o\5\i\7\6\4\w\k\u\o\l\x\z\p\d\9\d\6\q\b\x\v\a\l\x\j\m\a\t\m\2\n\z\w\f\0\6\7\i\s\9\e\o\r\s\x\4\z\i\i\d\c\u\c\7\t\s\k\t\v\b\u\2\4\m\y\b\8\y\u\q\y\p\1\o\x\c\p\0\m\r\y\i\7\m\f\a\x\o\f\s\m\p\x\a\y\v\v\f\q\c\9\i\f\i\s\h\x\l\5\d\k\z\w\v\q\x\h\i\m\3\e\p\n\4\q\k\f\f\v\j\k\n\k\2\a\k\n\e\c\s\y\m\9\m\p\5\o\u\m\s\5\v\t\i\a\v\g\v\e\v\7\o\p\5\z\2\g\3\p\n\o\n\8\k\l\7\9\3\j\l\i\h\r\5\z\4\o\5\t\r\v\n\j\x\d\z\b\4\y\f\8\h\3\z\u\s\h\8\9\9\5\c\w\9\u\9\r\8\q\m\t\u\l\v\g\i\a\2\0\l\0\r\j\p\4\6\b\8\u\h\e\y\n\m\v\n\g\f\1\u\d\s\x\r\v\5\n\z\c\o\a\s\5\w\z\e\q\m\s\0\x\c\u\e\3\w\1\j\f\2\q\g\v\l\v\q\7\4\s\n\7\f\9\s\n\v\9\y\7\v\l\6\t\p\7\4\x\b\o\q\n\e\1\c\6\v\8\c\s\d\8\j\t\9\a\i\e\9\f\t\1\r\u\2\v\1\7\j\a\x\h\7\1\d\h\m\q\a\r\m\u\l\y\7\h\7\n\7\f\q\n\b\h\f\x\t\i\q\j\t\z\f\o\6\k\w\0\j\s\g\e\7\b\g\d\9\5\v\d\y\q\8\m\e ]] 00:09:44.804 09:11:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:09:44.804 09:11:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ lljftf5l015g8ttyejxvx2v1b2fu1uutcddnpbuwpqjgaxe3pdtg133250jwrsn56f8l2he0aga4m6uxo9cfgfeyksk9o1ulgpu4lc6q2ur3pln6lzqp1witkjrvqb8md61btnotim1dae0ykfp9fug6ei7p55lbtm55yitj5ojl1wuxvt63iixgbkm0eudqdx2kkyy19alz4muzjgxwvho0lv802s2pw3naa4kv8jaa6m18iyb7ugf2z4rxah05zmu3695tlnetypzkzdsyrf7eq923o4trb264rdn3a0ueoo9bztrwxmf2lwnhpgb36sl8ojoqp8wbvan7nxh3o24conz7gemnov7uft8csr90zja9jca43r83jbkujd5h5e45nxjuoq29xkigj24ttv3t250vhhz7w4zd6bes8wemzmma2jpbpoz9j566qdb3u0pu00furgzkhgfbochoi94v2qu5k0qvb0vwvzlfzfmdbbglyf6fgfszf7u93zpgy31bxw114dp7apus0dfbrj85fvp79uaib74m392z4pebrr1kp10tzbvic0ivetjam2v119bk0x4i7uykmhlb6iwcmnh8dt4gu2lo4fmrruhn5w53jqcteo5i764wkuolxzpd9d6qbxvalxjmatm2nzwf067is9eorsx4ziidcuc7tsktvbu24myb8yuqyp1oxcp0mryi7mfaxofsmpxayvvfqc9ifishxl5dkzwvqxhim3epn4qkffvjknk2aknecsym9mp5oums5vtiavgvev7op5z2g3pnon8kl793jlihr5z4o5trvnjxdzb4yf8h3zush8995cw9u9r8qmtulvgia20l0rjp46b8uheynmvngf1udsxrv5nzcoas5wzeqms0xcue3w1jf2qgvlvq74sn7f9snv9y7vl6tp74xboqne1c6v8csd8jt9aie9ft1ru2v17jaxh71dhmqarmuly7h7n7fqnbhfxtiqjtzfo6kw0jsge7bgd95vdyq8me == 
\l\l\j\f\t\f\5\l\0\1\5\g\8\t\t\y\e\j\x\v\x\2\v\1\b\2\f\u\1\u\u\t\c\d\d\n\p\b\u\w\p\q\j\g\a\x\e\3\p\d\t\g\1\3\3\2\5\0\j\w\r\s\n\5\6\f\8\l\2\h\e\0\a\g\a\4\m\6\u\x\o\9\c\f\g\f\e\y\k\s\k\9\o\1\u\l\g\p\u\4\l\c\6\q\2\u\r\3\p\l\n\6\l\z\q\p\1\w\i\t\k\j\r\v\q\b\8\m\d\6\1\b\t\n\o\t\i\m\1\d\a\e\0\y\k\f\p\9\f\u\g\6\e\i\7\p\5\5\l\b\t\m\5\5\y\i\t\j\5\o\j\l\1\w\u\x\v\t\6\3\i\i\x\g\b\k\m\0\e\u\d\q\d\x\2\k\k\y\y\1\9\a\l\z\4\m\u\z\j\g\x\w\v\h\o\0\l\v\8\0\2\s\2\p\w\3\n\a\a\4\k\v\8\j\a\a\6\m\1\8\i\y\b\7\u\g\f\2\z\4\r\x\a\h\0\5\z\m\u\3\6\9\5\t\l\n\e\t\y\p\z\k\z\d\s\y\r\f\7\e\q\9\2\3\o\4\t\r\b\2\6\4\r\d\n\3\a\0\u\e\o\o\9\b\z\t\r\w\x\m\f\2\l\w\n\h\p\g\b\3\6\s\l\8\o\j\o\q\p\8\w\b\v\a\n\7\n\x\h\3\o\2\4\c\o\n\z\7\g\e\m\n\o\v\7\u\f\t\8\c\s\r\9\0\z\j\a\9\j\c\a\4\3\r\8\3\j\b\k\u\j\d\5\h\5\e\4\5\n\x\j\u\o\q\2\9\x\k\i\g\j\2\4\t\t\v\3\t\2\5\0\v\h\h\z\7\w\4\z\d\6\b\e\s\8\w\e\m\z\m\m\a\2\j\p\b\p\o\z\9\j\5\6\6\q\d\b\3\u\0\p\u\0\0\f\u\r\g\z\k\h\g\f\b\o\c\h\o\i\9\4\v\2\q\u\5\k\0\q\v\b\0\v\w\v\z\l\f\z\f\m\d\b\b\g\l\y\f\6\f\g\f\s\z\f\7\u\9\3\z\p\g\y\3\1\b\x\w\1\1\4\d\p\7\a\p\u\s\0\d\f\b\r\j\8\5\f\v\p\7\9\u\a\i\b\7\4\m\3\9\2\z\4\p\e\b\r\r\1\k\p\1\0\t\z\b\v\i\c\0\i\v\e\t\j\a\m\2\v\1\1\9\b\k\0\x\4\i\7\u\y\k\m\h\l\b\6\i\w\c\m\n\h\8\d\t\4\g\u\2\l\o\4\f\m\r\r\u\h\n\5\w\5\3\j\q\c\t\e\o\5\i\7\6\4\w\k\u\o\l\x\z\p\d\9\d\6\q\b\x\v\a\l\x\j\m\a\t\m\2\n\z\w\f\0\6\7\i\s\9\e\o\r\s\x\4\z\i\i\d\c\u\c\7\t\s\k\t\v\b\u\2\4\m\y\b\8\y\u\q\y\p\1\o\x\c\p\0\m\r\y\i\7\m\f\a\x\o\f\s\m\p\x\a\y\v\v\f\q\c\9\i\f\i\s\h\x\l\5\d\k\z\w\v\q\x\h\i\m\3\e\p\n\4\q\k\f\f\v\j\k\n\k\2\a\k\n\e\c\s\y\m\9\m\p\5\o\u\m\s\5\v\t\i\a\v\g\v\e\v\7\o\p\5\z\2\g\3\p\n\o\n\8\k\l\7\9\3\j\l\i\h\r\5\z\4\o\5\t\r\v\n\j\x\d\z\b\4\y\f\8\h\3\z\u\s\h\8\9\9\5\c\w\9\u\9\r\8\q\m\t\u\l\v\g\i\a\2\0\l\0\r\j\p\4\6\b\8\u\h\e\y\n\m\v\n\g\f\1\u\d\s\x\r\v\5\n\z\c\o\a\s\5\w\z\e\q\m\s\0\x\c\u\e\3\w\1\j\f\2\q\g\v\l\v\q\7\4\s\n\7\f\9\s\n\v\9\y\7\v\l\6\t\p\7\4\x\b\o\q\n\e\1\c\6\v\8\c\s\d\8\j\t\9\a\i\e\9\f\t\1\r\u\2\v\1\7\j\a\x\h\7\1\d\h\m\q\a\r\m\u\l\y\7\h\7\n\7\f\q\n\b\h\f\x\t\i\q\j\t\z\f\o\6\k\w\0\j\s\g\e\7\b\g\d\9\5\v\d\y\q\8\m\e ]] 00:09:44.804 09:11:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:45.063 09:11:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:09:45.063 09:11:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:09:45.063 09:11:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:45.063 09:11:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:45.321 { 00:09:45.321 "subsystems": [ 00:09:45.321 { 00:09:45.321 "subsystem": "bdev", 00:09:45.321 "config": [ 00:09:45.321 { 00:09:45.321 "params": { 00:09:45.321 "block_size": 512, 00:09:45.321 "num_blocks": 1048576, 00:09:45.321 "name": "malloc0" 00:09:45.321 }, 00:09:45.321 "method": "bdev_malloc_create" 00:09:45.322 }, 00:09:45.322 { 00:09:45.322 "params": { 00:09:45.322 "filename": "/dev/zram1", 00:09:45.322 "name": "uring0" 00:09:45.322 }, 00:09:45.322 "method": "bdev_uring_create" 00:09:45.322 }, 00:09:45.322 { 00:09:45.322 "method": "bdev_wait_for_examine" 00:09:45.322 } 00:09:45.322 ] 00:09:45.322 } 00:09:45.322 ] 00:09:45.322 } 00:09:45.322 [2024-12-13 09:11:39.045442] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:45.322 [2024-12-13 09:11:39.045599] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65157 ] 00:09:45.581 [2024-12-13 09:11:39.223193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.581 [2024-12-13 09:11:39.306429] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.581 [2024-12-13 09:11:39.453757] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:47.484  [2024-12-13T09:11:42.309Z] Copying: 142/512 [MB] (142 MBps) [2024-12-13T09:11:43.244Z] Copying: 278/512 [MB] (135 MBps) [2024-12-13T09:11:43.810Z] Copying: 411/512 [MB] (133 MBps) [2024-12-13T09:11:45.715Z] Copying: 512/512 [MB] (average 136 MBps) 00:09:51.825 00:09:51.825 09:11:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:09:51.825 09:11:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:09:51.825 09:11:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:51.825 09:11:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:51.825 09:11:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:09:51.825 09:11:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:09:51.825 09:11:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:51.825 09:11:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:52.084 { 00:09:52.084 "subsystems": [ 00:09:52.084 { 00:09:52.084 "subsystem": "bdev", 00:09:52.084 "config": [ 00:09:52.084 { 00:09:52.084 "params": { 00:09:52.084 "block_size": 512, 00:09:52.084 "num_blocks": 1048576, 00:09:52.084 "name": "malloc0" 00:09:52.084 }, 00:09:52.084 "method": "bdev_malloc_create" 00:09:52.084 }, 00:09:52.084 { 00:09:52.084 "params": { 00:09:52.084 "filename": "/dev/zram1", 00:09:52.084 "name": "uring0" 00:09:52.084 }, 00:09:52.084 "method": "bdev_uring_create" 00:09:52.084 }, 00:09:52.084 { 00:09:52.084 "params": { 00:09:52.084 "name": "uring0" 00:09:52.084 }, 00:09:52.084 "method": "bdev_uring_delete" 00:09:52.084 }, 00:09:52.084 { 00:09:52.084 "method": "bdev_wait_for_examine" 00:09:52.084 } 00:09:52.084 ] 00:09:52.084 } 00:09:52.084 ] 00:09:52.084 } 00:09:52.084 [2024-12-13 09:11:45.773186] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:52.084 [2024-12-13 09:11:45.773425] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65243 ] 00:09:52.084 [2024-12-13 09:11:45.953818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.343 [2024-12-13 09:11:46.051544] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.343 [2024-12-13 09:11:46.201776] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:52.911  [2024-12-13T09:11:48.706Z] Copying: 0/0 [B] (average 0 Bps) 00:09:54.816 00:09:54.816 09:11:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:54.816 09:11:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:09:54.816 09:11:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:09:54.816 09:11:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:09:54.816 09:11:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:54.816 09:11:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:54.816 09:11:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:54.816 09:11:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:54.816 09:11:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:54.816 09:11:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:54.816 09:11:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:54.816 09:11:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:54.816 09:11:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:54.816 09:11:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:54.816 09:11:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:54.816 09:11:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:55.075 { 00:09:55.075 "subsystems": [ 00:09:55.075 { 00:09:55.075 "subsystem": "bdev", 00:09:55.075 "config": [ 00:09:55.075 { 00:09:55.075 "params": { 00:09:55.075 "block_size": 512, 00:09:55.075 "num_blocks": 1048576, 00:09:55.075 "name": "malloc0" 00:09:55.075 }, 00:09:55.075 "method": "bdev_malloc_create" 00:09:55.075 }, 00:09:55.075 { 00:09:55.075 "params": { 00:09:55.075 "filename": "/dev/zram1", 00:09:55.075 "name": "uring0" 00:09:55.075 }, 00:09:55.075 "method": "bdev_uring_create" 00:09:55.075 }, 00:09:55.075 { 00:09:55.075 "params": { 00:09:55.075 "name": "uring0" 00:09:55.075 }, 00:09:55.075 "method": 
"bdev_uring_delete" 00:09:55.075 }, 00:09:55.075 { 00:09:55.075 "method": "bdev_wait_for_examine" 00:09:55.075 } 00:09:55.075 ] 00:09:55.075 } 00:09:55.075 ] 00:09:55.075 } 00:09:55.075 [2024-12-13 09:11:48.795111] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:55.075 [2024-12-13 09:11:48.795302] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65295 ] 00:09:55.334 [2024-12-13 09:11:48.966784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.334 [2024-12-13 09:11:49.067471] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.593 [2024-12-13 09:11:49.223766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:56.179 [2024-12-13 09:11:49.767248] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:09:56.179 [2024-12-13 09:11:49.767555] spdk_dd.c: 931:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:09:56.179 [2024-12-13 09:11:49.767583] spdk_dd.c:1088:dd_run: *ERROR*: uring0: No such device 00:09:56.179 [2024-12-13 09:11:49.767602] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:58.090 [2024-12-13 09:11:51.495116] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:58.090 09:11:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:09:58.090 09:11:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:58.090 09:11:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:09:58.090 09:11:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:09:58.090 09:11:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:09:58.090 09:11:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:58.090 09:11:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:09:58.090 09:11:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:09:58.090 09:11:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:09:58.090 09:11:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:09:58.090 09:11:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:09:58.090 09:11:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:58.349 ************************************ 00:09:58.349 END TEST dd_uring_copy 00:09:58.349 ************************************ 00:09:58.349 00:09:58.349 real 0m28.900s 00:09:58.349 user 0m23.502s 00:09:58.349 sys 0m16.227s 00:09:58.349 09:11:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.349 09:11:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:58.349 ************************************ 00:09:58.349 END TEST spdk_dd_uring 00:09:58.349 ************************************ 00:09:58.349 00:09:58.349 real 0m29.140s 00:09:58.349 user 0m23.623s 00:09:58.349 sys 0m16.345s 00:09:58.349 09:11:52 spdk_dd.spdk_dd_uring -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.349 09:11:52 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:09:58.349 09:11:52 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:58.349 09:11:52 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:58.349 09:11:52 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.349 09:11:52 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:58.349 ************************************ 00:09:58.349 START TEST spdk_dd_sparse 00:09:58.349 ************************************ 00:09:58.349 09:11:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:58.349 * Looking for test storage... 00:09:58.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:58.349 09:11:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:58.349 09:11:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lcov --version 00:09:58.350 09:11:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:58.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.609 --rc genhtml_branch_coverage=1 00:09:58.609 --rc genhtml_function_coverage=1 00:09:58.609 --rc genhtml_legend=1 00:09:58.609 --rc geninfo_all_blocks=1 00:09:58.609 --rc geninfo_unexecuted_blocks=1 00:09:58.609 00:09:58.609 ' 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:58.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.609 --rc genhtml_branch_coverage=1 00:09:58.609 --rc genhtml_function_coverage=1 00:09:58.609 --rc genhtml_legend=1 00:09:58.609 --rc geninfo_all_blocks=1 00:09:58.609 --rc geninfo_unexecuted_blocks=1 00:09:58.609 00:09:58.609 ' 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:58.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.609 --rc genhtml_branch_coverage=1 00:09:58.609 --rc genhtml_function_coverage=1 00:09:58.609 --rc genhtml_legend=1 00:09:58.609 --rc geninfo_all_blocks=1 00:09:58.609 --rc geninfo_unexecuted_blocks=1 00:09:58.609 00:09:58.609 ' 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:58.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.609 --rc genhtml_branch_coverage=1 00:09:58.609 --rc genhtml_function_coverage=1 00:09:58.609 --rc genhtml_legend=1 00:09:58.609 --rc geninfo_all_blocks=1 00:09:58.609 --rc geninfo_unexecuted_blocks=1 00:09:58.609 00:09:58.609 ' 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.609 09:11:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.609 09:11:52 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:09:58.610 1+0 records in 00:09:58.610 1+0 records out 00:09:58.610 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00659169 s, 636 MB/s 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:09:58.610 1+0 records in 00:09:58.610 1+0 records out 00:09:58.610 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00701777 s, 598 MB/s 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:09:58.610 1+0 records in 00:09:58.610 1+0 records out 00:09:58.610 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00614542 s, 683 MB/s 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:58.610 ************************************ 00:09:58.610 START TEST dd_sparse_file_to_file 00:09:58.610 ************************************ 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:58.610 09:11:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:58.610 { 00:09:58.610 "subsystems": [ 00:09:58.610 { 00:09:58.610 "subsystem": "bdev", 00:09:58.610 "config": [ 00:09:58.610 { 00:09:58.610 "params": { 00:09:58.610 "block_size": 4096, 00:09:58.610 "filename": "dd_sparse_aio_disk", 00:09:58.610 "name": "dd_aio" 00:09:58.610 }, 00:09:58.610 "method": "bdev_aio_create" 00:09:58.610 }, 00:09:58.610 { 00:09:58.610 "params": { 00:09:58.610 "lvs_name": "dd_lvstore", 00:09:58.610 "bdev_name": "dd_aio" 00:09:58.610 }, 00:09:58.610 "method": "bdev_lvol_create_lvstore" 00:09:58.610 }, 00:09:58.610 { 00:09:58.610 "method": "bdev_wait_for_examine" 00:09:58.610 } 00:09:58.610 ] 00:09:58.610 } 00:09:58.610 ] 00:09:58.610 } 00:09:58.610 [2024-12-13 09:11:52.419729] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
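The prepare step above builds a deliberately sparse input: file_zero1 receives three 4 MiB chunks of zeroes at seek=0, 4 and 8 (in 4 MiB units), i.e. at offsets 0, 16 MiB and 32 MiB, so the apparent size is 36 MiB while only 12 MiB is actually allocated. The stat assertions later in this test depend on exactly those numbers; reproduced from the stat calls in the log:

stat --printf='%s\n' file_zero1    # 37748736 bytes  = 36 MiB apparent size (last chunk ends at 8*4 MiB + 4 MiB)
stat --printf='%b\n' file_zero1    # 24576 blocks    = 24576 * 512 B = 12582912 bytes = 12 MiB actually allocated

After the --sparse copy, file_zero2 must report the same 37748736 / 24576 pair, which is what the [[ 37748736 == 37748736 ]] and [[ 24576 == 24576 ]] comparisons below verify: the copy preserved both the data and the holes.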
00:09:58.610 [2024-12-13 09:11:52.420031] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65419 ] 00:09:58.869 [2024-12-13 09:11:52.586286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.869 [2024-12-13 09:11:52.680784] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.128 [2024-12-13 09:11:52.837267] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:59.386  [2024-12-13T09:11:54.299Z] Copying: 12/36 [MB] (average 1000 MBps) 00:10:00.409 00:10:00.409 09:11:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:10:00.409 09:11:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:10:00.409 09:11:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:10:00.409 09:11:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:10:00.409 09:11:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:10:00.409 09:11:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:10:00.409 09:11:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:10:00.409 09:11:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:10:00.409 ************************************ 00:10:00.409 END TEST dd_sparse_file_to_file 00:10:00.409 ************************************ 00:10:00.409 09:11:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:10:00.409 09:11:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:10:00.409 00:10:00.409 real 0m1.703s 00:10:00.409 user 0m1.404s 00:10:00.409 sys 0m0.908s 00:10:00.409 09:11:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.409 09:11:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:00.409 09:11:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:10:00.409 09:11:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:00.409 09:11:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.409 09:11:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:00.409 ************************************ 00:10:00.409 START TEST dd_sparse_file_to_bdev 00:10:00.409 ************************************ 00:10:00.409 09:11:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:10:00.409 09:11:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:10:00.409 09:11:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:10:00.409 09:11:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' 
['thin_provision']='true') 00:10:00.409 09:11:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:10:00.409 09:11:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:10:00.409 09:11:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:10:00.409 09:11:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:10:00.410 09:11:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:00.410 { 00:10:00.410 "subsystems": [ 00:10:00.410 { 00:10:00.410 "subsystem": "bdev", 00:10:00.410 "config": [ 00:10:00.410 { 00:10:00.410 "params": { 00:10:00.410 "block_size": 4096, 00:10:00.410 "filename": "dd_sparse_aio_disk", 00:10:00.410 "name": "dd_aio" 00:10:00.410 }, 00:10:00.410 "method": "bdev_aio_create" 00:10:00.410 }, 00:10:00.410 { 00:10:00.410 "params": { 00:10:00.410 "lvs_name": "dd_lvstore", 00:10:00.410 "lvol_name": "dd_lvol", 00:10:00.410 "size_in_mib": 36, 00:10:00.410 "thin_provision": true 00:10:00.410 }, 00:10:00.410 "method": "bdev_lvol_create" 00:10:00.410 }, 00:10:00.410 { 00:10:00.410 "method": "bdev_wait_for_examine" 00:10:00.410 } 00:10:00.410 ] 00:10:00.410 } 00:10:00.410 ] 00:10:00.410 } 00:10:00.410 [2024-12-13 09:11:54.174057] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:10:00.410 [2024-12-13 09:11:54.174427] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65479 ] 00:10:00.668 [2024-12-13 09:11:54.342410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.668 [2024-12-13 09:11:54.429126] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.927 [2024-12-13 09:11:54.591247] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:00.927  [2024-12-13T09:11:55.753Z] Copying: 12/36 [MB] (average 521 MBps) 00:10:01.863 00:10:02.122 ************************************ 00:10:02.122 END TEST dd_sparse_file_to_bdev 00:10:02.122 ************************************ 00:10:02.122 00:10:02.122 real 0m1.669s 00:10:02.122 user 0m1.426s 00:10:02.122 sys 0m0.902s 00:10:02.122 09:11:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.122 09:11:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:02.122 09:11:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:10:02.122 09:11:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:02.122 09:11:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.122 09:11:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:02.122 ************************************ 00:10:02.122 START TEST dd_sparse_bdev_to_file 00:10:02.122 ************************************ 00:10:02.122 09:11:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:10:02.122 09:11:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 
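In this direction the copy target is a bdev rather than a file: --ob takes a bdev name, and an lvol is addressed as lvstore/lvol. Because dd_lvol is created thin-provisioned (size_in_mib 36, thin_provision true), only the 12 MiB of real data in file_zero2 is transferred, matching the 12/36 [MB] progress line above. The test that has just started, dd_sparse_bdev_to_file, reads the same lvol back out with --ib. Side by side, the two invocations from this run (spdk_dd abbreviated; the log uses the full build path):

# file -> thin-provisioned lvol (dd_sparse_file_to_bdev, shown above)
spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62
# lvol -> file (dd_sparse_bdev_to_file, shown below)
spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62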
00:10:02.122 09:11:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:10:02.122 09:11:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:10:02.122 09:11:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:10:02.122 09:11:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:10:02.122 09:11:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:10:02.122 09:11:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:10:02.122 09:11:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:02.122 { 00:10:02.122 "subsystems": [ 00:10:02.122 { 00:10:02.122 "subsystem": "bdev", 00:10:02.122 "config": [ 00:10:02.122 { 00:10:02.122 "params": { 00:10:02.122 "block_size": 4096, 00:10:02.122 "filename": "dd_sparse_aio_disk", 00:10:02.122 "name": "dd_aio" 00:10:02.122 }, 00:10:02.122 "method": "bdev_aio_create" 00:10:02.122 }, 00:10:02.122 { 00:10:02.122 "method": "bdev_wait_for_examine" 00:10:02.122 } 00:10:02.122 ] 00:10:02.122 } 00:10:02.122 ] 00:10:02.122 } 00:10:02.122 [2024-12-13 09:11:55.898860] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:10:02.122 [2024-12-13 09:11:55.899284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65523 ] 00:10:02.381 [2024-12-13 09:11:56.066228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.381 [2024-12-13 09:11:56.154758] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.640 [2024-12-13 09:11:56.326384] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:02.640  [2024-12-13T09:11:57.466Z] Copying: 12/36 [MB] (average 1333 MBps) 00:10:03.576 00:10:03.576 09:11:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:10:03.576 09:11:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:10:03.576 09:11:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:10:03.576 09:11:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:10:03.576 09:11:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:10:03.835 09:11:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:10:03.835 09:11:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:10:03.835 09:11:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:10:03.835 ************************************ 00:10:03.835 END TEST dd_sparse_bdev_to_file 00:10:03.835 ************************************ 00:10:03.835 09:11:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:10:03.835 09:11:57 
spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:10:03.835 00:10:03.835 real 0m1.669s 00:10:03.835 user 0m1.369s 00:10:03.835 sys 0m0.919s 00:10:03.835 09:11:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.835 09:11:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:03.835 09:11:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:10:03.835 09:11:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:10:03.835 09:11:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:10:03.835 09:11:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:10:03.835 09:11:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:10:03.835 ************************************ 00:10:03.835 END TEST spdk_dd_sparse 00:10:03.835 ************************************ 00:10:03.835 00:10:03.835 real 0m5.464s 00:10:03.835 user 0m4.381s 00:10:03.835 sys 0m2.947s 00:10:03.835 09:11:57 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.835 09:11:57 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:03.835 09:11:57 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:10:03.835 09:11:57 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:03.835 09:11:57 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.835 09:11:57 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:03.835 ************************************ 00:10:03.835 START TEST spdk_dd_negative 00:10:03.835 ************************************ 00:10:03.835 09:11:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:10:03.835 * Looking for test storage... 
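The spdk_dd_negative suite that follows feeds spdk_dd invalid argument combinations and passes only when the binary refuses them with a non-zero exit. Stripped of the harness helpers (NOT and valid_exec_arg from autotest_common.sh), each case reduces to a check along these lines; this is a sketch rather than the harness code, and the flags shown are the ones the first test below uses:

# --ii= is not a valid option, so spdk_dd must print its usage and exit non-zero.
if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= ; then
    echo "spdk_dd accepted invalid arguments" >&2
    exit 1
fi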
00:10:03.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:03.835 09:11:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:03.835 09:11:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lcov --version 00:10:03.835 09:11:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:04.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.095 --rc genhtml_branch_coverage=1 00:10:04.095 --rc genhtml_function_coverage=1 00:10:04.095 --rc genhtml_legend=1 00:10:04.095 --rc geninfo_all_blocks=1 00:10:04.095 --rc geninfo_unexecuted_blocks=1 00:10:04.095 00:10:04.095 ' 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:04.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.095 --rc genhtml_branch_coverage=1 00:10:04.095 --rc genhtml_function_coverage=1 00:10:04.095 --rc genhtml_legend=1 00:10:04.095 --rc geninfo_all_blocks=1 00:10:04.095 --rc geninfo_unexecuted_blocks=1 00:10:04.095 00:10:04.095 ' 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:04.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.095 --rc genhtml_branch_coverage=1 00:10:04.095 --rc genhtml_function_coverage=1 00:10:04.095 --rc genhtml_legend=1 00:10:04.095 --rc geninfo_all_blocks=1 00:10:04.095 --rc geninfo_unexecuted_blocks=1 00:10:04.095 00:10:04.095 ' 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:04.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.095 --rc genhtml_branch_coverage=1 00:10:04.095 --rc genhtml_function_coverage=1 00:10:04.095 --rc genhtml_legend=1 00:10:04.095 --rc geninfo_all_blocks=1 00:10:04.095 --rc geninfo_unexecuted_blocks=1 00:10:04.095 00:10:04.095 ' 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:04.095 ************************************ 00:10:04.095 START TEST 
dd_invalid_arguments 00:10:04.095 ************************************ 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:04.095 09:11:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:10:04.095 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:10:04.095 00:10:04.095 CPU options: 00:10:04.095 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:10:04.095 (like [0,1,10]) 00:10:04.095 --lcores lcore to CPU mapping list. The list is in the format: 00:10:04.095 [<,lcores[@CPUs]>...] 00:10:04.095 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:10:04.095 Within the group, '-' is used for range separator, 00:10:04.095 ',' is used for single number separator. 00:10:04.095 '( )' can be omitted for single element group, 00:10:04.095 '@' can be omitted if cpus and lcores have the same value 00:10:04.095 --disable-cpumask-locks Disable CPU core lock files. 00:10:04.095 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:10:04.095 pollers in the app support interrupt mode) 00:10:04.095 -p, --main-core main (primary) core for DPDK 00:10:04.095 00:10:04.095 Configuration options: 00:10:04.095 -c, --config, --json JSON config file 00:10:04.095 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:10:04.095 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:10:04.095 --wait-for-rpc wait for RPCs to initialize subsystems 00:10:04.095 --rpcs-allowed comma-separated list of permitted RPCS 00:10:04.095 --json-ignore-init-errors don't exit on invalid config entry 00:10:04.095 00:10:04.095 Memory options: 00:10:04.095 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:10:04.095 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:10:04.095 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:10:04.095 -R, --huge-unlink unlink huge files after initialization 00:10:04.095 -n, --mem-channels number of memory channels used for DPDK 00:10:04.095 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:10:04.095 --msg-mempool-size global message memory pool size in count (default: 262143) 00:10:04.095 --no-huge run without using hugepages 00:10:04.095 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:10:04.095 -i, --shm-id shared memory ID (optional) 00:10:04.095 -g, --single-file-segments force creating just one hugetlbfs file 00:10:04.095 00:10:04.095 PCI options: 00:10:04.095 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:10:04.095 -B, --pci-blocked pci addr to block (can be used more than once) 00:10:04.096 -u, --no-pci disable PCI access 00:10:04.096 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:10:04.096 00:10:04.096 Log options: 00:10:04.096 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:10:04.096 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:10:04.096 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:10:04.096 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:10:04.096 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, fuse_dispatcher, 00:10:04.096 gpt_parse, idxd, ioat, iscsi_init, json_util, keyring, log_rpc, lvol, 00:10:04.096 lvol_rpc, notify_rpc, nvme, nvme_auth, nvme_cuse, nvme_vfio, opal, 00:10:04.096 reactor, rpc, rpc_client, scsi, sock, sock_posix, spdk_aio_mgr_io, 00:10:04.096 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:10:04.096 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, vfu, 00:10:04.096 vfu_virtio, vfu_virtio_blk, vfu_virtio_fs, vfu_virtio_fs_data, 00:10:04.096 vfu_virtio_io, vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, 00:10:04.096 virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:10:04.096 --silence-noticelog disable notice level logging to stderr 00:10:04.096 00:10:04.096 Trace options: 00:10:04.096 --num-trace-entries number of trace entries for each core, must be power of 2, 00:10:04.096 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:10:04.096 [2024-12-13 09:11:57.877355] spdk_dd.c:1478:main: *ERROR*: Invalid arguments 00:10:04.096 setting 0 to disable trace (default 32768) 00:10:04.096 Tracepoints vary in size and can use more than one trace entry. 00:10:04.096 -e, --tpoint-group [:] 00:10:04.096 group_name - tracepoint group name for spdk trace buffers (scsi, bdev, 00:10:04.096 ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, 00:10:04.096 blob, bdev_raid, scheduler, all). 00:10:04.096 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:10:04.096 a tracepoint group. First tpoint inside a group can be enabled by 00:10:04.096 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:10:04.096 combined (e.g. 
thread,bdev:0x1). All available tpoints can be found 00:10:04.096 in /include/spdk_internal/trace_defs.h 00:10:04.096 00:10:04.096 Other options: 00:10:04.096 -h, --help show this usage 00:10:04.096 -v, --version print SPDK version 00:10:04.096 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:10:04.096 --env-context Opaque context for use of the env implementation 00:10:04.096 00:10:04.096 Application specific: 00:10:04.096 [--------- DD Options ---------] 00:10:04.096 --if Input file. Must specify either --if or --ib. 00:10:04.096 --ib Input bdev. Must specifier either --if or --ib 00:10:04.096 --of Output file. Must specify either --of or --ob. 00:10:04.096 --ob Output bdev. Must specify either --of or --ob. 00:10:04.096 --iflag Input file flags. 00:10:04.096 --oflag Output file flags. 00:10:04.096 --bs I/O unit size (default: 4096) 00:10:04.096 --qd Queue depth (default: 2) 00:10:04.096 --count I/O unit count. The number of I/O units to copy. (default: all) 00:10:04.096 --skip Skip this many I/O units at start of input. (default: 0) 00:10:04.096 --seek Skip this many I/O units at start of output. (default: 0) 00:10:04.096 --aio Force usage of AIO. (by default io_uring is used if available) 00:10:04.096 --sparse Enable hole skipping in input target 00:10:04.096 Available iflag and oflag values: 00:10:04.096 append - append mode 00:10:04.096 direct - use direct I/O for data 00:10:04.096 directory - fail unless a directory 00:10:04.096 dsync - use synchronized I/O for data 00:10:04.096 noatime - do not update access time 00:10:04.096 noctty - do not assign controlling terminal from file 00:10:04.096 nofollow - do not follow symlinks 00:10:04.096 nonblock - use non-blocking I/O 00:10:04.096 sync - use synchronized I/O for data and metadata 00:10:04.096 09:11:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:10:04.096 09:11:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:04.096 09:11:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:04.096 09:11:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:04.096 00:10:04.096 real 0m0.144s 00:10:04.096 user 0m0.087s 00:10:04.096 sys 0m0.056s 00:10:04.096 09:11:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.096 ************************************ 00:10:04.096 END TEST dd_invalid_arguments 00:10:04.096 ************************************ 00:10:04.096 09:11:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:10:04.096 09:11:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:10:04.096 09:11:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:04.096 09:11:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.096 09:11:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:04.096 ************************************ 00:10:04.096 START TEST dd_double_input 00:10:04.096 ************************************ 00:10:04.096 09:11:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:10:04.096 09:11:57 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:10:04.096 09:11:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:10:04.355 09:11:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:10:04.355 09:11:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.355 09:11:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.355 09:11:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.355 09:11:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.355 09:11:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.355 09:11:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.355 09:11:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.355 09:11:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:04.355 09:11:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:10:04.355 [2024-12-13 09:11:58.088756] spdk_dd.c:1485:main: *ERROR*: You may specify either --if or --ib, but not both. 
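The error above comes from passing both an input file (--if) and an input bdev (--ib). The lines that follow capture spdk_dd's exit status in es and assert that it is non-zero; in this run the status is 22, which lines up with EINVAL. A standalone sketch of the same bookkeeping, using the paths shown in the log:

es=0
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= || es=$?
# spdk_dd rejects the combination and exits 22, so es ends up non-zero.
(( !es == 0 ))    # the arithmetic check the harness uses: true only when es != 0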
00:10:04.355 09:11:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:10:04.355 09:11:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:04.355 09:11:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:04.355 09:11:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:04.355 00:10:04.355 real 0m0.167s 00:10:04.355 user 0m0.097s 00:10:04.355 sys 0m0.068s 00:10:04.355 ************************************ 00:10:04.355 END TEST dd_double_input 00:10:04.355 ************************************ 00:10:04.355 09:11:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.355 09:11:58 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:10:04.355 09:11:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:10:04.355 09:11:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:04.355 09:11:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.355 09:11:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:04.355 ************************************ 00:10:04.355 START TEST dd_double_output 00:10:04.355 ************************************ 00:10:04.355 09:11:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:10:04.355 09:11:58 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:10:04.355 09:11:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:10:04.355 09:11:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:10:04.355 09:11:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.355 09:11:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.355 09:11:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.355 09:11:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.355 09:11:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.355 09:11:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.355 09:11:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.355 09:11:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:04.355 09:11:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:10:04.613 [2024-12-13 09:11:58.315050] spdk_dd.c:1491:main: *ERROR*: You may specify either --of or --ob, but not both. 00:10:04.613 09:11:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:10:04.613 09:11:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:04.613 09:11:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:04.613 09:11:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:04.613 00:10:04.613 real 0m0.177s 00:10:04.613 user 0m0.103s 00:10:04.613 sys 0m0.072s 00:10:04.613 09:11:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.613 09:11:58 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:10:04.613 ************************************ 00:10:04.613 END TEST dd_double_output 00:10:04.613 ************************************ 00:10:04.613 09:11:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:10:04.614 09:11:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:04.614 09:11:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.614 09:11:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:04.614 ************************************ 00:10:04.614 START TEST dd_no_input 00:10:04.614 ************************************ 00:10:04.614 09:11:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:10:04.614 09:11:58 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:10:04.614 09:11:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:10:04.614 09:11:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:10:04.614 09:11:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.614 09:11:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.614 09:11:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.614 09:11:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.614 09:11:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.614 09:11:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.614 09:11:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.614 09:11:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:04.614 09:11:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:10:04.873 [2024-12-13 09:11:58.520095] spdk_dd.c:1497:main: 
*ERROR*: You must specify either --if or --ib 00:10:04.873 09:11:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:10:04.873 09:11:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:04.873 09:11:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:04.873 09:11:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:04.873 00:10:04.873 real 0m0.141s 00:10:04.873 user 0m0.084s 00:10:04.873 sys 0m0.055s 00:10:04.873 ************************************ 00:10:04.873 END TEST dd_no_input 00:10:04.873 ************************************ 00:10:04.873 09:11:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.873 09:11:58 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:10:04.873 09:11:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:10:04.873 09:11:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:04.873 09:11:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.873 09:11:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:04.873 ************************************ 00:10:04.873 START TEST dd_no_output 00:10:04.873 ************************************ 00:10:04.873 09:11:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:10:04.873 09:11:58 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:04.873 09:11:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:10:04.873 09:11:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:04.873 09:11:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.873 09:11:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.873 09:11:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.873 09:11:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.873 09:11:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.873 09:11:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.873 09:11:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.873 09:11:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:04.873 09:11:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:04.873 [2024-12-13 09:11:58.736597] spdk_dd.c:1503:main: *ERROR*: You must specify either --of or --ob 00:10:05.133 09:11:58 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:05.133 00:10:05.133 real 0m0.170s 00:10:05.133 user 0m0.094s 00:10:05.133 sys 0m0.075s 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:10:05.133 ************************************ 00:10:05.133 END TEST dd_no_output 00:10:05.133 ************************************ 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:05.133 ************************************ 00:10:05.133 START TEST dd_wrong_blocksize 00:10:05.133 ************************************ 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:10:05.133 [2024-12-13 09:11:58.943816] spdk_dd.c:1509:main: *ERROR*: Invalid --bs value 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:05.133 00:10:05.133 real 0m0.146s 00:10:05.133 user 0m0.083s 00:10:05.133 sys 0m0.061s 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.133 ************************************ 00:10:05.133 END TEST dd_wrong_blocksize 00:10:05.133 ************************************ 00:10:05.133 09:11:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:10:05.392 09:11:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:10:05.392 09:11:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:05.392 09:11:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.392 09:11:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:05.392 ************************************ 00:10:05.392 START TEST dd_smaller_blocksize 00:10:05.392 ************************************ 00:10:05.392 09:11:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:10:05.392 09:11:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:10:05.392 09:11:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:10:05.392 09:11:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:10:05.392 09:11:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.392 09:11:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:05.392 09:11:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.392 09:11:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:05.392 09:11:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.392 09:11:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:05.392 09:11:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:05.392 
09:11:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:05.392 09:11:59 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:10:05.392 [2024-12-13 09:11:59.165860] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:10:05.392 [2024-12-13 09:11:59.166040] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65772 ] 00:10:05.652 [2024-12-13 09:11:59.346504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.652 [2024-12-13 09:11:59.434390] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.910 [2024-12-13 09:11:59.603894] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:06.169 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:10:06.427 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:10:06.427 [2024-12-13 09:12:00.292734] spdk_dd.c:1182:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:10:06.427 [2024-12-13 09:12:00.292830] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:07.362 [2024-12-13 09:12:00.978883] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:10:07.362 09:12:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:10:07.362 09:12:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:07.362 09:12:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:10:07.362 09:12:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:10:07.362 09:12:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:10:07.362 09:12:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:07.362 00:10:07.362 real 0m2.195s 00:10:07.362 user 0m1.437s 00:10:07.362 sys 0m0.646s 00:10:07.362 09:12:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.362 ************************************ 00:10:07.362 END TEST dd_smaller_blocksize 00:10:07.362 ************************************ 00:10:07.362 09:12:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:10:07.621 09:12:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:10:07.621 09:12:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:07.621 09:12:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.621 09:12:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:07.621 ************************************ 00:10:07.621 START TEST dd_invalid_count 00:10:07.621 ************************************ 00:10:07.621 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
00:10:07.621 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:10:07.621 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:10:07.621 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:10:07.621 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:07.621 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:07.621 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:07.621 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:07.621 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:07.621 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:07.621 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:07.621 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:07.621 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:10:07.621 [2024-12-13 09:12:01.413257] spdk_dd.c:1515:main: *ERROR*: Invalid --count value 00:10:07.621 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:10:07.621 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:07.621 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:07.621 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:07.621 00:10:07.621 real 0m0.173s 00:10:07.621 user 0m0.095s 00:10:07.621 sys 0m0.076s 00:10:07.621 ************************************ 00:10:07.621 END TEST dd_invalid_count 00:10:07.621 ************************************ 00:10:07.621 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.621 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:07.880 ************************************ 
00:10:07.880 START TEST dd_invalid_oflag 00:10:07.880 ************************************ 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:10:07.880 [2024-12-13 09:12:01.636132] spdk_dd.c:1521:main: *ERROR*: --oflags may be used only with --of 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:07.880 00:10:07.880 real 0m0.174s 00:10:07.880 user 0m0.093s 00:10:07.880 sys 0m0.079s 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:10:07.880 ************************************ 00:10:07.880 END TEST dd_invalid_oflag 00:10:07.880 ************************************ 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:07.880 ************************************ 00:10:07.880 START TEST dd_invalid_iflag 00:10:07.880 
************************************ 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:07.880 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:10:08.139 [2024-12-13 09:12:01.865746] spdk_dd.c:1527:main: *ERROR*: --iflags may be used only with --if 00:10:08.139 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:10:08.139 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:08.139 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:08.139 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:08.139 00:10:08.139 real 0m0.171s 00:10:08.139 user 0m0.097s 00:10:08.139 sys 0m0.072s 00:10:08.139 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.139 ************************************ 00:10:08.139 END TEST dd_invalid_iflag 00:10:08.139 ************************************ 00:10:08.139 09:12:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:10:08.139 09:12:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:10:08.139 09:12:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:08.139 09:12:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.139 09:12:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:08.139 ************************************ 00:10:08.139 START TEST dd_unknown_flag 00:10:08.139 ************************************ 00:10:08.139 
09:12:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:10:08.139 09:12:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:10:08.139 09:12:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:10:08.139 09:12:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:10:08.139 09:12:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:08.139 09:12:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:08.139 09:12:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:08.139 09:12:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:08.139 09:12:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:08.139 09:12:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:08.139 09:12:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:08.139 09:12:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:08.139 09:12:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:10:08.397 [2024-12-13 09:12:02.094026] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:08.397 [2024-12-13 09:12:02.094202] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65886 ] 00:10:08.397 [2024-12-13 09:12:02.278161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.655 [2024-12-13 09:12:02.401515] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.913 [2024-12-13 09:12:02.576123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:08.913 [2024-12-13 09:12:02.680027] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:10:08.913 [2024-12-13 09:12:02.680126] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:08.913 [2024-12-13 09:12:02.680212] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:10:08.913 [2024-12-13 09:12:02.680248] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:08.913 [2024-12-13 09:12:02.680567] spdk_dd.c:1216:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:10:08.913 [2024-12-13 09:12:02.680606] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:08.913 [2024-12-13 09:12:02.680694] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:10:08.913 [2024-12-13 09:12:02.680729] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:10:09.849 [2024-12-13 09:12:03.413938] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:10:09.849 09:12:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:10:09.849 09:12:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:09.849 09:12:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:10:09.849 09:12:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:10:09.849 09:12:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:10:09.849 09:12:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:09.849 00:10:09.849 real 0m1.697s 00:10:09.849 user 0m1.385s 00:10:09.849 sys 0m0.203s 00:10:09.849 09:12:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.849 ************************************ 00:10:09.849 END TEST dd_unknown_flag 00:10:09.849 ************************************ 00:10:09.849 09:12:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:10:09.849 09:12:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:10:09.849 09:12:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:09.849 09:12:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.849 09:12:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:09.849 ************************************ 00:10:09.849 START TEST dd_invalid_json 00:10:09.849 ************************************ 00:10:09.849 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:10:09.849 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:10:09.849 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:10:09.849 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:10:09.849 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:09.849 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:10:09.849 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:09.849 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:09.849 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:09.849 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:09.849 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:09.849 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:09.849 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:09.849 09:12:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:10:10.108 [2024-12-13 09:12:03.820214] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:10.108 [2024-12-13 09:12:03.820415] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65932 ] 00:10:10.108 [2024-12-13 09:12:03.989135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.366 [2024-12-13 09:12:04.077791] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.366 [2024-12-13 09:12:04.077912] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:10:10.366 [2024-12-13 09:12:04.077933] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:10.366 [2024-12-13 09:12:04.077948] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:10.366 [2024-12-13 09:12:04.078026] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:10.625 00:10:10.625 real 0m0.589s 00:10:10.625 user 0m0.360s 00:10:10.625 sys 0m0.127s 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.625 ************************************ 00:10:10.625 END TEST dd_invalid_json 00:10:10.625 ************************************ 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:10.625 ************************************ 00:10:10.625 START TEST dd_invalid_seek 00:10:10.625 ************************************ 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:10:10.625 
09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:10.625 09:12:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:10:10.625 { 00:10:10.625 "subsystems": [ 00:10:10.625 { 00:10:10.625 "subsystem": "bdev", 00:10:10.625 "config": [ 00:10:10.625 { 00:10:10.625 "params": { 00:10:10.625 "block_size": 512, 00:10:10.625 "num_blocks": 512, 00:10:10.625 "name": "malloc0" 00:10:10.625 }, 00:10:10.625 "method": "bdev_malloc_create" 00:10:10.625 }, 00:10:10.625 { 00:10:10.625 "params": { 00:10:10.625 "block_size": 512, 00:10:10.625 "num_blocks": 512, 00:10:10.625 "name": "malloc1" 00:10:10.625 }, 00:10:10.625 "method": "bdev_malloc_create" 00:10:10.625 }, 00:10:10.625 { 00:10:10.625 "method": "bdev_wait_for_examine" 00:10:10.625 } 00:10:10.625 ] 00:10:10.625 } 00:10:10.625 ] 00:10:10.625 } 00:10:10.625 [2024-12-13 09:12:04.491103] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:10.626 [2024-12-13 09:12:04.491274] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65963 ] 00:10:10.885 [2024-12-13 09:12:04.669943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.885 [2024-12-13 09:12:04.767552] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.143 [2024-12-13 09:12:04.920823] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:11.405 [2024-12-13 09:12:05.049370] spdk_dd.c:1143:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:10:11.405 [2024-12-13 09:12:05.049504] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:11.979 [2024-12-13 09:12:05.736898] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:10:12.237 09:12:05 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:10:12.237 09:12:05 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:12.237 09:12:05 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:10:12.237 09:12:05 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:10:12.237 09:12:05 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:10:12.237 09:12:05 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:12.237 00:10:12.237 real 0m1.610s 00:10:12.237 user 0m1.345s 00:10:12.237 sys 0m0.213s 00:10:12.237 09:12:05 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.237 ************************************ 00:10:12.237 END TEST dd_invalid_seek 00:10:12.237 ************************************ 00:10:12.237 09:12:05 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:10:12.238 09:12:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:10:12.238 09:12:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:12.238 09:12:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.238 09:12:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:12.238 ************************************ 00:10:12.238 START TEST dd_invalid_skip 00:10:12.238 ************************************ 00:10:12.238 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:10:12.238 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:10:12.238 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:10:12.238 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:10:12.238 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:10:12.238 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:10:12.238 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:10:12.238 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:10:12.238 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:10:12.238 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:10:12.238 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:10:12.238 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:10:12.238 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:12.238 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:10:12.238 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:12.238 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:12.238 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:12.238 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:12.238 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:12.238 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:12.238 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:12.238 09:12:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:10:12.238 { 00:10:12.238 "subsystems": [ 00:10:12.238 { 00:10:12.238 "subsystem": "bdev", 00:10:12.238 "config": [ 00:10:12.238 { 00:10:12.238 "params": { 00:10:12.238 "block_size": 512, 00:10:12.238 "num_blocks": 512, 00:10:12.238 "name": "malloc0" 00:10:12.238 }, 00:10:12.238 "method": "bdev_malloc_create" 00:10:12.238 }, 00:10:12.238 { 00:10:12.238 "params": { 00:10:12.238 "block_size": 512, 00:10:12.238 "num_blocks": 512, 00:10:12.238 "name": "malloc1" 00:10:12.238 }, 00:10:12.238 "method": "bdev_malloc_create" 00:10:12.238 }, 00:10:12.238 { 00:10:12.238 "method": "bdev_wait_for_examine" 00:10:12.238 } 00:10:12.238 ] 00:10:12.238 } 00:10:12.238 ] 00:10:12.238 } 00:10:12.496 [2024-12-13 09:12:06.137822] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:12.496 [2024-12-13 09:12:06.137999] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66008 ] 00:10:12.496 [2024-12-13 09:12:06.302331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.754 [2024-12-13 09:12:06.398177] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.754 [2024-12-13 09:12:06.556288] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:13.012 [2024-12-13 09:12:06.687894] spdk_dd.c:1100:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:10:13.012 [2024-12-13 09:12:06.687997] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:13.579 [2024-12-13 09:12:07.349935] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:10:13.837 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:10:13.837 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:13.837 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:10:13.837 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:10:13.837 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:10:13.837 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:13.837 00:10:13.837 real 0m1.556s 00:10:13.837 user 0m1.315s 00:10:13.837 sys 0m0.192s 00:10:13.837 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:10:13.838 ************************************ 00:10:13.838 END TEST dd_invalid_skip 00:10:13.838 ************************************ 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:13.838 ************************************ 00:10:13.838 START TEST dd_invalid_input_count 00:10:13.838 ************************************ 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:13.838 09:12:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:10:14.096 { 00:10:14.096 "subsystems": [ 00:10:14.096 { 00:10:14.096 "subsystem": "bdev", 00:10:14.096 "config": [ 00:10:14.096 { 00:10:14.096 "params": { 00:10:14.096 "block_size": 512, 00:10:14.096 "num_blocks": 512, 00:10:14.096 "name": "malloc0" 00:10:14.096 }, 00:10:14.096 "method": "bdev_malloc_create" 00:10:14.096 }, 00:10:14.096 { 00:10:14.096 "params": { 00:10:14.096 "block_size": 512, 00:10:14.096 "num_blocks": 512, 00:10:14.096 "name": "malloc1" 00:10:14.096 }, 00:10:14.096 "method": "bdev_malloc_create" 00:10:14.096 }, 00:10:14.096 { 00:10:14.096 "method": "bdev_wait_for_examine" 00:10:14.096 } 00:10:14.096 ] 00:10:14.096 } 00:10:14.096 ] 00:10:14.096 } 00:10:14.096 [2024-12-13 09:12:07.790419] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:14.096 [2024-12-13 09:12:07.790592] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66058 ] 00:10:14.096 [2024-12-13 09:12:07.969329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.355 [2024-12-13 09:12:08.063432] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.355 [2024-12-13 09:12:08.216061] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:14.613 [2024-12-13 09:12:08.346347] spdk_dd.c:1108:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:10:14.613 [2024-12-13 09:12:08.346476] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:15.213 [2024-12-13 09:12:08.981988] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:15.497 00:10:15.497 real 0m1.581s 00:10:15.497 user 0m1.343s 00:10:15.497 sys 0m0.218s 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:10:15.497 ************************************ 00:10:15.497 END TEST dd_invalid_input_count 00:10:15.497 ************************************ 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:15.497 ************************************ 00:10:15.497 START TEST dd_invalid_output_count 00:10:15.497 ************************************ 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:15.497 09:12:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:10:15.497 { 00:10:15.497 "subsystems": [ 00:10:15.497 { 00:10:15.497 "subsystem": "bdev", 00:10:15.497 "config": [ 00:10:15.497 { 00:10:15.497 "params": { 00:10:15.497 "block_size": 512, 00:10:15.497 "num_blocks": 512, 00:10:15.497 "name": "malloc0" 00:10:15.497 }, 00:10:15.497 "method": "bdev_malloc_create" 00:10:15.497 }, 00:10:15.497 { 00:10:15.497 "method": "bdev_wait_for_examine" 00:10:15.497 } 00:10:15.497 ] 00:10:15.497 } 00:10:15.497 ] 00:10:15.497 } 00:10:15.756 [2024-12-13 09:12:09.416913] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:15.756 [2024-12-13 09:12:09.417107] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66099 ] 00:10:15.756 [2024-12-13 09:12:09.596990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.015 [2024-12-13 09:12:09.683666] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.015 [2024-12-13 09:12:09.836659] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:16.273 [2024-12-13 09:12:09.953671] spdk_dd.c:1150:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:10:16.273 [2024-12-13 09:12:09.953777] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:16.841 [2024-12-13 09:12:10.579624] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:17.100 00:10:17.100 real 0m1.510s 00:10:17.100 user 0m1.247s 00:10:17.100 sys 0m0.211s 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:10:17.100 ************************************ 00:10:17.100 END TEST dd_invalid_output_count 00:10:17.100 ************************************ 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:17.100 ************************************ 00:10:17.100 START TEST dd_bs_not_multiple 00:10:17.100 ************************************ 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:10:17.100 09:12:10 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:17.100 09:12:10 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:10:17.100 { 00:10:17.100 "subsystems": [ 00:10:17.100 { 00:10:17.100 "subsystem": "bdev", 00:10:17.100 "config": [ 00:10:17.100 { 00:10:17.100 "params": { 00:10:17.100 "block_size": 512, 00:10:17.100 "num_blocks": 512, 00:10:17.100 "name": "malloc0" 00:10:17.100 }, 00:10:17.100 "method": "bdev_malloc_create" 00:10:17.100 }, 00:10:17.100 { 00:10:17.100 "params": { 00:10:17.100 "block_size": 512, 00:10:17.100 "num_blocks": 512, 00:10:17.100 "name": "malloc1" 00:10:17.100 }, 00:10:17.100 "method": "bdev_malloc_create" 00:10:17.100 }, 00:10:17.100 { 00:10:17.100 "method": "bdev_wait_for_examine" 00:10:17.100 } 00:10:17.100 ] 00:10:17.100 } 00:10:17.100 ] 00:10:17.100 } 00:10:17.100 [2024-12-13 09:12:10.978976] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
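Editor's note: dd_bs_not_multiple follows the same pattern with two 512-byte-block malloc bdevs and a deliberately misaligned --bs=513, expecting the "--bs value must be a multiple of input native block size" error seen further down. A minimal sketch of just the misaligned copy, under the same assumptions as the previous sketch, is:

  # Two 512-block malloc bdevs; --bs=513 is not a multiple of the 512-byte native block size.
  conf='{"subsystems":[{"subsystem":"bdev","config":[
    {"method":"bdev_malloc_create",
     "params":{"name":"malloc0","num_blocks":512,"block_size":512}},
    {"method":"bdev_malloc_create",
     "params":{"name":"malloc1","num_blocks":512,"block_size":512}},
    {"method":"bdev_wait_for_examine"}]}]}'

  if "$SPDK_DD" --ib=malloc0 --ob=malloc1 --bs=513 --json <(printf '%s' "$conf"); then
      echo "unexpected success" >&2
      exit 1
  fi
  # Expected failure: --bs value must be a multiple of input native block size (512)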
00:10:17.100 [2024-12-13 09:12:10.979691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66143 ] 00:10:17.359 [2024-12-13 09:12:11.160952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.617 [2024-12-13 09:12:11.259697] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.617 [2024-12-13 09:12:11.420063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:17.876 [2024-12-13 09:12:11.534786] spdk_dd.c:1166:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:10:17.876 [2024-12-13 09:12:11.534872] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:18.443 [2024-12-13 09:12:12.151019] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:10:18.703 09:12:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:10:18.703 09:12:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:18.703 09:12:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:10:18.703 09:12:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:10:18.703 09:12:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:10:18.703 09:12:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:18.703 00:10:18.703 real 0m1.517s 00:10:18.703 user 0m1.251s 00:10:18.703 sys 0m0.222s 00:10:18.703 09:12:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.703 09:12:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:10:18.703 ************************************ 00:10:18.703 END TEST dd_bs_not_multiple 00:10:18.703 ************************************ 00:10:18.703 00:10:18.703 real 0m14.824s 00:10:18.703 user 0m10.929s 00:10:18.703 sys 0m3.253s 00:10:18.703 09:12:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.703 ************************************ 00:10:18.703 END TEST spdk_dd_negative 00:10:18.703 ************************************ 00:10:18.703 09:12:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:18.703 00:10:18.703 real 2m47.119s 00:10:18.703 user 2m14.168s 00:10:18.703 sys 1m2.632s 00:10:18.703 09:12:12 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.703 09:12:12 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:18.703 ************************************ 00:10:18.703 END TEST spdk_dd 00:10:18.703 ************************************ 00:10:18.703 09:12:12 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:10:18.703 09:12:12 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:10:18.703 09:12:12 -- spdk/autotest.sh@260 -- # timing_exit lib 00:10:18.703 09:12:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:18.703 09:12:12 -- common/autotest_common.sh@10 -- # set +x 00:10:18.703 09:12:12 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:10:18.703 09:12:12 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:10:18.703 09:12:12 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:10:18.703 09:12:12 -- spdk/autotest.sh@277 
-- # export NET_TYPE 00:10:18.703 09:12:12 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:10:18.703 09:12:12 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:10:18.703 09:12:12 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:18.703 09:12:12 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:18.703 09:12:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.703 09:12:12 -- common/autotest_common.sh@10 -- # set +x 00:10:18.703 ************************************ 00:10:18.703 START TEST nvmf_tcp 00:10:18.703 ************************************ 00:10:18.703 09:12:12 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:18.961 * Looking for test storage... 00:10:18.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:18.961 09:12:12 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:18.961 09:12:12 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:18.961 09:12:12 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:10:18.961 09:12:12 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:18.961 09:12:12 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.961 09:12:12 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.961 09:12:12 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.961 09:12:12 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.961 09:12:12 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.961 09:12:12 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.961 09:12:12 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.961 09:12:12 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.961 09:12:12 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.961 09:12:12 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.961 09:12:12 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.961 09:12:12 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:18.962 09:12:12 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:10:18.962 09:12:12 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.962 09:12:12 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:18.962 09:12:12 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:18.962 09:12:12 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:10:18.962 09:12:12 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.962 09:12:12 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:10:18.962 09:12:12 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.962 09:12:12 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:18.962 09:12:12 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:10:18.962 09:12:12 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.962 09:12:12 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:10:18.962 09:12:12 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.962 09:12:12 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.962 09:12:12 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.962 09:12:12 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:10:18.962 09:12:12 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.962 09:12:12 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:18.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.962 --rc genhtml_branch_coverage=1 00:10:18.962 --rc genhtml_function_coverage=1 00:10:18.962 --rc genhtml_legend=1 00:10:18.962 --rc geninfo_all_blocks=1 00:10:18.962 --rc geninfo_unexecuted_blocks=1 00:10:18.962 00:10:18.962 ' 00:10:18.962 09:12:12 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:18.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.962 --rc genhtml_branch_coverage=1 00:10:18.962 --rc genhtml_function_coverage=1 00:10:18.962 --rc genhtml_legend=1 00:10:18.962 --rc geninfo_all_blocks=1 00:10:18.962 --rc geninfo_unexecuted_blocks=1 00:10:18.962 00:10:18.962 ' 00:10:18.962 09:12:12 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:18.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.962 --rc genhtml_branch_coverage=1 00:10:18.962 --rc genhtml_function_coverage=1 00:10:18.962 --rc genhtml_legend=1 00:10:18.962 --rc geninfo_all_blocks=1 00:10:18.962 --rc geninfo_unexecuted_blocks=1 00:10:18.962 00:10:18.962 ' 00:10:18.962 09:12:12 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:18.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.962 --rc genhtml_branch_coverage=1 00:10:18.962 --rc genhtml_function_coverage=1 00:10:18.962 --rc genhtml_legend=1 00:10:18.962 --rc geninfo_all_blocks=1 00:10:18.962 --rc geninfo_unexecuted_blocks=1 00:10:18.962 00:10:18.962 ' 00:10:18.962 09:12:12 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:10:18.962 09:12:12 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:10:18.962 09:12:12 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:10:18.962 09:12:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:18.962 09:12:12 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.962 09:12:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:18.962 ************************************ 00:10:18.962 START TEST nvmf_target_core 00:10:18.962 ************************************ 00:10:18.962 09:12:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:10:18.962 * Looking for test storage... 00:10:18.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:18.962 09:12:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:18.962 09:12:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:18.962 09:12:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:10:19.220 09:12:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:19.220 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.220 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.220 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.220 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.220 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.220 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:19.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.221 --rc genhtml_branch_coverage=1 00:10:19.221 --rc genhtml_function_coverage=1 00:10:19.221 --rc genhtml_legend=1 00:10:19.221 --rc geninfo_all_blocks=1 00:10:19.221 --rc geninfo_unexecuted_blocks=1 00:10:19.221 00:10:19.221 ' 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:19.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.221 --rc genhtml_branch_coverage=1 00:10:19.221 --rc genhtml_function_coverage=1 00:10:19.221 --rc genhtml_legend=1 00:10:19.221 --rc geninfo_all_blocks=1 00:10:19.221 --rc geninfo_unexecuted_blocks=1 00:10:19.221 00:10:19.221 ' 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:19.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.221 --rc genhtml_branch_coverage=1 00:10:19.221 --rc genhtml_function_coverage=1 00:10:19.221 --rc genhtml_legend=1 00:10:19.221 --rc geninfo_all_blocks=1 00:10:19.221 --rc geninfo_unexecuted_blocks=1 00:10:19.221 00:10:19.221 ' 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:19.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.221 --rc genhtml_branch_coverage=1 00:10:19.221 --rc genhtml_function_coverage=1 00:10:19.221 --rc genhtml_legend=1 00:10:19.221 --rc geninfo_all_blocks=1 00:10:19.221 --rc geninfo_unexecuted_blocks=1 00:10:19.221 00:10:19.221 ' 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:19.221 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:19.221 ************************************ 00:10:19.221 START TEST nvmf_host_management 00:10:19.221 ************************************ 00:10:19.221 09:12:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:19.221 * Looking for test storage... 
00:10:19.221 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:19.221 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:19.221 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:10:19.221 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:19.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.480 --rc genhtml_branch_coverage=1 00:10:19.480 --rc genhtml_function_coverage=1 00:10:19.480 --rc genhtml_legend=1 00:10:19.480 --rc geninfo_all_blocks=1 00:10:19.480 --rc geninfo_unexecuted_blocks=1 00:10:19.480 00:10:19.480 ' 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:19.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.480 --rc genhtml_branch_coverage=1 00:10:19.480 --rc genhtml_function_coverage=1 00:10:19.480 --rc genhtml_legend=1 00:10:19.480 --rc geninfo_all_blocks=1 00:10:19.480 --rc geninfo_unexecuted_blocks=1 00:10:19.480 00:10:19.480 ' 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:19.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.480 --rc genhtml_branch_coverage=1 00:10:19.480 --rc genhtml_function_coverage=1 00:10:19.480 --rc genhtml_legend=1 00:10:19.480 --rc geninfo_all_blocks=1 00:10:19.480 --rc geninfo_unexecuted_blocks=1 00:10:19.480 00:10:19.480 ' 00:10:19.480 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:19.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.480 --rc genhtml_branch_coverage=1 00:10:19.480 --rc genhtml_function_coverage=1 00:10:19.481 --rc genhtml_legend=1 00:10:19.481 --rc geninfo_all_blocks=1 00:10:19.481 --rc geninfo_unexecuted_blocks=1 00:10:19.481 00:10:19.481 ' 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
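Editor's note: the repeated lt / cmp_versions trace above is the lcov capability probe from scripts/common.sh — both version strings are split on '.', '-' and ':' and compared field by field as integers. A condensed sketch of that comparison, reconstructed from the trace rather than copied from the helper, is:

  # Return 0 (true) when $1 is an older version than $2, e.g. "1.15" < "2".
  version_lt() {
      local IFS=.-:
      local -a a=($1) b=($2)
      local v x y
      for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
          x=${a[v]:-0} y=${b[v]:-0}
          ((x < y)) && return 0
          ((x > y)) && return 1
      done
      return 1  # equal versions are not "less than"
  }

  if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
      echo "lcov is pre-2.0: keep the branch/function coverage flags used above"
  fi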
00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:19.481 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:19.481 09:12:13 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:19.481 Cannot find device "nvmf_init_br" 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:19.481 Cannot find device "nvmf_init_br2" 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:19.481 Cannot find device "nvmf_tgt_br" 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:19.481 Cannot find device "nvmf_tgt_br2" 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:19.481 Cannot find device "nvmf_init_br" 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:19.481 Cannot find device "nvmf_init_br2" 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:19.481 Cannot find device "nvmf_tgt_br" 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:10:19.481 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:19.482 Cannot find device "nvmf_tgt_br2" 00:10:19.482 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:10:19.482 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:19.482 Cannot find device "nvmf_br" 00:10:19.482 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:10:19.482 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:19.482 Cannot find device "nvmf_init_if" 00:10:19.482 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:10:19.482 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:19.482 Cannot find device "nvmf_init_if2" 00:10:19.482 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:10:19.482 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:19.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:19.482 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:10:19.482 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:19.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:19.482 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:10:19.482 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:19.482 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:19.482 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:19.482 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:19.482 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:19.482 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:19.482 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:19.482 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:19.482 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:19.482 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:19.740 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:19.740 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 00:10:19.740 00:10:19.740 --- 10.0.0.3 ping statistics --- 00:10:19.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.740 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:19.740 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:19.740 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:10:19.740 00:10:19.740 --- 10.0.0.4 ping statistics --- 00:10:19.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.740 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:19.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:19.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:19.740 00:10:19.740 --- 10.0.0.1 ping statistics --- 00:10:19.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.740 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:19.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:19.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:10:19.740 00:10:19.740 --- 10.0.0.2 ping statistics --- 00:10:19.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.740 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:19.740 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:19.999 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:19.999 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:19.999 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:19.999 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:19.999 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:19.999 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:19.999 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=66498 00:10:19.999 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 66498 00:10:19.999 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:19.999 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 66498 ']' 00:10:19.999 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.999 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.999 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.999 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.999 09:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:19.999 [2024-12-13 09:12:13.759238] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
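Editor's note: all of the ip/iptables calls traced above come from nvmf_veth_init in test/nvmf/common.sh. Condensed into one place (commands copied from the trace, showing only the first initiator/target pair), the virtual test network is built like this:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the netns

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge joins both sides
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3   # default namespace -> target namespace, as verified above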
00:10:19.999 [2024-12-13 09:12:13.759427] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.257 [2024-12-13 09:12:13.950958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:20.257 [2024-12-13 09:12:14.083021] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.257 [2024-12-13 09:12:14.083092] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.257 [2024-12-13 09:12:14.083118] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.257 [2024-12-13 09:12:14.083144] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.257 [2024-12-13 09:12:14.083161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.257 [2024-12-13 09:12:14.085351] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.257 [2024-12-13 09:12:14.085445] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:20.257 [2024-12-13 09:12:14.085831] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:10:20.257 [2024-12-13 09:12:14.085839] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.514 [2024-12-13 09:12:14.333288] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:21.080 [2024-12-13 09:12:14.772249] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:21.080 Malloc0 00:10:21.080 [2024-12-13 09:12:14.891710] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:21.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=66552 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 66552 /var/tmp/bdevperf.sock 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 66552 ']' 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
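Note (editorial): the RPC batch that host_management.sh pipes into rpc_cmd between the cat at @23 and the rpc_cmd at @30 is not echoed in this trace; only its effects are visible (the Malloc0 bdev and the NVMe/TCP listener on 10.0.0.3 port 4420). A rough, hypothetical reconstruction using standard SPDK RPC names, with sizes illustrative only:

  # Sketch only -- the actual rpcs.txt contents are not shown in this log.
  bdev_malloc_create -b Malloc0 64 512
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0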
00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:21.080 { 00:10:21.080 "params": { 00:10:21.080 "name": "Nvme$subsystem", 00:10:21.080 "trtype": "$TEST_TRANSPORT", 00:10:21.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:21.080 "adrfam": "ipv4", 00:10:21.080 "trsvcid": "$NVMF_PORT", 00:10:21.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:21.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:21.080 "hdgst": ${hdgst:-false}, 00:10:21.080 "ddgst": ${ddgst:-false} 00:10:21.080 }, 00:10:21.080 "method": "bdev_nvme_attach_controller" 00:10:21.080 } 00:10:21.080 EOF 00:10:21.080 )") 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:10:21.080 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:21.081 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:21.081 "params": { 00:10:21.081 "name": "Nvme0", 00:10:21.081 "trtype": "tcp", 00:10:21.081 "traddr": "10.0.0.3", 00:10:21.081 "adrfam": "ipv4", 00:10:21.081 "trsvcid": "4420", 00:10:21.081 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:21.081 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:21.081 "hdgst": false, 00:10:21.081 "ddgst": false 00:10:21.081 }, 00:10:21.081 "method": "bdev_nvme_attach_controller" 00:10:21.081 }' 00:10:21.339 [2024-12-13 09:12:15.061238] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:10:21.339 [2024-12-13 09:12:15.061417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66552 ] 00:10:21.597 [2024-12-13 09:12:15.249624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.597 [2024-12-13 09:12:15.373504] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.854 [2024-12-13 09:12:15.577077] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:22.111 Running I/O for 10 seconds... 
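Note (editorial): the "--json /dev/fd/63" on the bdevperf command line above is the result of bash process substitution; gen_nvmf_target_json 0 emits the bdev_nvme_attach_controller parameters printed just above, and bdevperf reads them as its JSON config. Once the job is running, the waitforio helper traced below polls the bdev until at least 100 reads have completed. A condensed sketch, assuming only the command and helper names visible in this trace:

  # Launch bdevperf against the generated config (this is what appears as /dev/fd/63):
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10 &

  # waitforio, condensed: poll read ops over the bdevperf RPC socket until >= 100.
  while :; do
      read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
          jq -r '.bdevs[0].num_read_ops')
      (( read_io_count >= 100 )) && break
  done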
00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=387 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:22.371 09:12:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.371 09:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:22.371 [2024-12-13 09:12:16.173132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.371 [2024-12-13 09:12:16.173197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.371 [2024-12-13 09:12:16.173234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.371 [2024-12-13 09:12:16.173251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.371 [2024-12-13 09:12:16.173268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.371 [2024-12-13 09:12:16.173296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.371 [2024-12-13 09:12:16.173315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.371 [2024-12-13 09:12:16.173329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.371 [2024-12-13 09:12:16.173344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.371 [2024-12-13 09:12:16.173358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.371 [2024-12-13 09:12:16.173373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.371 [2024-12-13 09:12:16.173387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.371 [2024-12-13 09:12:16.173402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.371 [2024-12-13 09:12:16.173415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.371 [2024-12-13 09:12:16.173430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.371 [2024-12-13 09:12:16.173444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.371 [2024-12-13 09:12:16.173459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.371 [2024-12-13 09:12:16.173472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.371 [2024-12-13 09:12:16.173487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.371 [2024-12-13 09:12:16.173500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.371 [2024-12-13 09:12:16.173515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.371 [2024-12-13 09:12:16.173528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.371 [2024-12-13 09:12:16.173543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.371 [2024-12-13 09:12:16.173557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.371 [2024-12-13 09:12:16.173572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.371 [2024-12-13 09:12:16.173585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.371 [2024-12-13 09:12:16.173600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.371 [2024-12-13 09:12:16.173614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.173629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.173648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.173664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.173677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.173693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.173706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.173721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.173734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.173749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.173763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.173778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.173791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.173806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.173819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.173835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.173848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.173863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.173876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.173891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.173904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.173919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.173933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.173948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.173961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.173976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.173989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.174004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:10:22.372 [2024-12-13 09:12:16.174032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.174060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.174088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.174118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.174146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.174175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.174203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.174231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.174259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.174300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 
[2024-12-13 09:12:16.174363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.174393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.174421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.174450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.174479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.174507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.174534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.174562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.174591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.174647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 
09:12:16.174676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.174705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.174733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.174761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.174789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.174817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.372 [2024-12-13 09:12:16.174830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.372 [2024-12-13 09:12:16.174845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.373 [2024-12-13 09:12:16.174858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.373 [2024-12-13 09:12:16.174873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.373 [2024-12-13 09:12:16.174886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.373 [2024-12-13 09:12:16.174901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.373 [2024-12-13 09:12:16.174914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.373 [2024-12-13 09:12:16.174929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.373 [2024-12-13 09:12:16.174942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.373 [2024-12-13 
09:12:16.174956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.373 [2024-12-13 09:12:16.174969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.373 [2024-12-13 09:12:16.174984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.373 [2024-12-13 09:12:16.174997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.373 [2024-12-13 09:12:16.175012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.373 [2024-12-13 09:12:16.175025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.373 [2024-12-13 09:12:16.175040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.373 [2024-12-13 09:12:16.175053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.373 [2024-12-13 09:12:16.175068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.373 [2024-12-13 09:12:16.175091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.373 [2024-12-13 09:12:16.175108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:22.373 [2024-12-13 09:12:16.175122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.373 [2024-12-13 09:12:16.175136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(6) to be set 00:10:22.373 task offset: 65536 on job bdev=Nvme0n1 fails 00:10:22.373 00:10:22.373 Latency(us) 00:10:22.373 [2024-12-13T09:12:16.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.373 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:22.373 Job: Nvme0n1 ended in about 0.40 seconds with error 00:10:22.373 Verification LBA range: start 0x0 length 0x400 00:10:22.373 Nvme0n1 : 0.40 1271.00 79.44 158.87 0.00 43228.48 3008.70 42419.67 00:10:22.373 [2024-12-13T09:12:16.263Z] =================================================================================================================== 00:10:22.373 [2024-12-13T09:12:16.263Z] Total : 1271.00 79.44 158.87 0.00 43228.48 3008.70 42419.67 00:10:22.373 [2024-12-13 09:12:16.175584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:10:22.373 [2024-12-13 09:12:16.175615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.373 [2024-12-13 09:12:16.175633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
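Note (editorial): the bdevperf summary above records the failure this test provokes on purpose. Removing host0's access to cnode0 (host_management.sh@84, traced earlier) while the verify job is running causes the in-flight WRITEs to complete as "ABORTED - SQ DELETION" (the long dump surrounding this table), and the job ends after about 0.40 seconds with errors. The throughput column is consistent with the 64 KiB I/O size:

  1271.00 IOPS x 65536 B = 83,296,256 B/s, i.e. about 79.44 MiB/s

while the Fail/s column (158.87) reflects the aborted commands.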
00:10:22.373 [2024-12-13 09:12:16.175646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.373 [2024-12-13 09:12:16.175660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:10:22.373 [2024-12-13 09:12:16.175673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.373 [2024-12-13 09:12:16.175687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:10:22.373 [2024-12-13 09:12:16.175700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.373 [2024-12-13 09:12:16.175712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:10:22.373 [2024-12-13 09:12:16.176932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:10:22.373 [2024-12-13 09:12:16.182014] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:22.373 [2024-12-13 09:12:16.182064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:10:22.373 [2024-12-13 09:12:16.195886] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:10:23.307 09:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 66552 00:10:23.307 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (66552) - No such process 00:10:23.307 09:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:23.307 09:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:23.307 09:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:23.307 09:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:23.307 09:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:23.307 09:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:23.307 09:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:23.307 09:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:23.307 { 00:10:23.307 "params": { 00:10:23.307 "name": "Nvme$subsystem", 00:10:23.307 "trtype": "$TEST_TRANSPORT", 00:10:23.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:23.307 "adrfam": "ipv4", 00:10:23.307 "trsvcid": "$NVMF_PORT", 00:10:23.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:23.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:23.307 "hdgst": ${hdgst:-false}, 00:10:23.307 "ddgst": ${ddgst:-false} 00:10:23.307 }, 00:10:23.307 "method": "bdev_nvme_attach_controller" 00:10:23.307 } 00:10:23.307 EOF 00:10:23.307 
)") 00:10:23.307 09:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:23.307 09:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:10:23.307 09:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:23.307 09:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:23.307 "params": { 00:10:23.307 "name": "Nvme0", 00:10:23.307 "trtype": "tcp", 00:10:23.307 "traddr": "10.0.0.3", 00:10:23.307 "adrfam": "ipv4", 00:10:23.307 "trsvcid": "4420", 00:10:23.307 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:23.307 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:23.307 "hdgst": false, 00:10:23.307 "ddgst": false 00:10:23.307 }, 00:10:23.307 "method": "bdev_nvme_attach_controller" 00:10:23.307 }' 00:10:23.565 [2024-12-13 09:12:17.280120] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:10:23.565 [2024-12-13 09:12:17.280320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66592 ] 00:10:23.822 [2024-12-13 09:12:17.468707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.822 [2024-12-13 09:12:17.594079] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.079 [2024-12-13 09:12:17.812578] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:24.336 Running I/O for 1 seconds... 00:10:25.268 1344.00 IOPS, 84.00 MiB/s 00:10:25.268 Latency(us) 00:10:25.268 [2024-12-13T09:12:19.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.268 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:25.268 Verification LBA range: start 0x0 length 0x400 00:10:25.268 Nvme0n1 : 1.03 1367.32 85.46 0.00 0.00 45927.15 5540.77 41228.10 00:10:25.268 [2024-12-13T09:12:19.158Z] =================================================================================================================== 00:10:25.268 [2024-12-13T09:12:19.158Z] Total : 1367.32 85.46 0.00 0.00 45927.15 5540.77 41228.10 00:10:26.201 09:12:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:10:26.201 09:12:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:26.201 09:12:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:10:26.201 09:12:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:10:26.201 09:12:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:10:26.201 09:12:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:26.201 09:12:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:10:26.459 09:12:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:26.459 09:12:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:10:26.459 09:12:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 
-- # for i in {1..20} 00:10:26.459 09:12:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:26.459 rmmod nvme_tcp 00:10:26.459 rmmod nvme_fabrics 00:10:26.459 rmmod nvme_keyring 00:10:26.459 09:12:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:26.459 09:12:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:10:26.459 09:12:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:10:26.459 09:12:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 66498 ']' 00:10:26.459 09:12:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 66498 00:10:26.459 09:12:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 66498 ']' 00:10:26.459 09:12:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 66498 00:10:26.459 09:12:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:10:26.459 09:12:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:26.459 09:12:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66498 00:10:26.459 killing process with pid 66498 00:10:26.459 09:12:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:26.459 09:12:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:26.459 09:12:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66498' 00:10:26.459 09:12:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 66498 00:10:26.459 09:12:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 66498 00:10:27.404 [2024-12-13 09:12:21.248001] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:27.677 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:27.677 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:27.677 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:27.677 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:10:27.677 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:27.677 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:10:27.677 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:10:27.677 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:27.677 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:27.677 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:27.677 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:27.677 
09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:27.677 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:27.677 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:27.677 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:27.677 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:27.677 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:27.677 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:27.677 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:27.677 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:27.677 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:27.677 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:27.677 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:27.677 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.677 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.677 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:27.936 00:10:27.936 real 0m8.619s 00:10:27.936 user 0m32.718s 00:10:27.936 sys 0m1.763s 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:27.936 ************************************ 00:10:27.936 END TEST nvmf_host_management 00:10:27.936 ************************************ 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:27.936 ************************************ 00:10:27.936 START TEST nvmf_lvol 00:10:27.936 ************************************ 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:27.936 * Looking for test storage... 
00:10:27.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:27.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.936 --rc genhtml_branch_coverage=1 00:10:27.936 --rc genhtml_function_coverage=1 00:10:27.936 --rc genhtml_legend=1 00:10:27.936 --rc geninfo_all_blocks=1 00:10:27.936 --rc geninfo_unexecuted_blocks=1 00:10:27.936 00:10:27.936 ' 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:27.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.936 --rc genhtml_branch_coverage=1 00:10:27.936 --rc genhtml_function_coverage=1 00:10:27.936 --rc genhtml_legend=1 00:10:27.936 --rc geninfo_all_blocks=1 00:10:27.936 --rc geninfo_unexecuted_blocks=1 00:10:27.936 00:10:27.936 ' 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:27.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.936 --rc genhtml_branch_coverage=1 00:10:27.936 --rc genhtml_function_coverage=1 00:10:27.936 --rc genhtml_legend=1 00:10:27.936 --rc geninfo_all_blocks=1 00:10:27.936 --rc geninfo_unexecuted_blocks=1 00:10:27.936 00:10:27.936 ' 00:10:27.936 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:27.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.937 --rc genhtml_branch_coverage=1 00:10:27.937 --rc genhtml_function_coverage=1 00:10:27.937 --rc genhtml_legend=1 00:10:27.937 --rc geninfo_all_blocks=1 00:10:27.937 --rc geninfo_unexecuted_blocks=1 00:10:27.937 00:10:27.937 ' 00:10:27.937 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:27.937 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:28.195 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.195 09:12:21 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.195 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.195 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.195 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.195 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.195 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.195 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.195 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.195 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.195 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:10:28.195 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:10:28.195 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.195 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.195 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:28.195 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.195 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:28.195 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:10:28.195 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:28.196 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:28.196 
09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
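At this point nvmftestinit has laid out the addressing plan for the virtual test network: the initiator addresses 10.0.0.1 and 10.0.0.2 stay on the host, the target addresses 10.0.0.3 and 10.0.0.4 live inside the nvmf_tgt_ns_spdk namespace, and both sides meet on the nvmf_br bridge. The nvmf_veth_init commands traced below build exactly that topology; condensed to the first initiator/target pair, the sequence is roughly the following sketch (the trace also sets up nvmf_init_if2/nvmf_tgt_if2 the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair, host side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end moves into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # both veth peers join the bridge
    ip link set nvmf_tgt_br master nvmf_br

The "Cannot find device" and "Cannot open network namespace" messages that precede the setup are expected: the helper first tries to tear down whatever topology a previous run may have left behind (each failed delete is followed by "true" in the trace) and only then recreates it.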
00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:28.196 Cannot find device "nvmf_init_br" 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:28.196 Cannot find device "nvmf_init_br2" 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:28.196 Cannot find device "nvmf_tgt_br" 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:28.196 Cannot find device "nvmf_tgt_br2" 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:28.196 Cannot find device "nvmf_init_br" 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:28.196 Cannot find device "nvmf_init_br2" 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:28.196 Cannot find device "nvmf_tgt_br" 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:28.196 Cannot find device "nvmf_tgt_br2" 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:28.196 Cannot find device "nvmf_br" 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:28.196 Cannot find device "nvmf_init_if" 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:28.196 Cannot find device "nvmf_init_if2" 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:28.196 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:28.196 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:28.196 09:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:28.196 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:28.196 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:28.196 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:28.196 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:28.196 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:28.196 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:28.196 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:28.196 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:28.196 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:28.454 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:28.454 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms 00:10:28.454 00:10:28.454 --- 10.0.0.3 ping statistics --- 00:10:28.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.454 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:28.454 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:28.454 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:10:28.454 00:10:28.454 --- 10.0.0.4 ping statistics --- 00:10:28.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.454 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:28.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:28.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:28.454 00:10:28.454 --- 10.0.0.1 ping statistics --- 00:10:28.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.454 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:28.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:28.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:10:28.454 00:10:28.454 --- 10.0.0.2 ping statistics --- 00:10:28.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.454 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=66894 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 66894 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 66894 ']' 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.454 09:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:28.712 [2024-12-13 09:12:22.382326] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:28.712 [2024-12-13 09:12:22.382474] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:28.712 [2024-12-13 09:12:22.559262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:28.970 [2024-12-13 09:12:22.676377] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:28.970 [2024-12-13 09:12:22.676442] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:28.970 [2024-12-13 09:12:22.676462] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:28.970 [2024-12-13 09:12:22.676475] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:28.970 [2024-12-13 09:12:22.676491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:28.970 [2024-12-13 09:12:22.678263] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.970 [2024-12-13 09:12:22.678362] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.970 [2024-12-13 09:12:22.678378] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:28.970 [2024-12-13 09:12:22.856949] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:29.537 09:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.537 09:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:10:29.537 09:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:29.537 09:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:29.537 09:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:29.537 09:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.537 09:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:29.795 [2024-12-13 09:12:23.644996] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:29.795 09:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:30.360 09:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:30.360 09:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:30.618 09:12:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:30.618 09:12:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:30.876 09:12:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:31.135 09:12:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d1d2d5d4-44f3-4b58-9fc4-df01c42e87d4 00:10:31.135 09:12:24 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d1d2d5d4-44f3-4b58-9fc4-df01c42e87d4 lvol 20 00:10:31.401 09:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=041cd3bb-087f-4da2-bb73-53e09147a7d3 00:10:31.401 09:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:31.663 09:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 041cd3bb-087f-4da2-bb73-53e09147a7d3 00:10:31.921 09:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:32.179 [2024-12-13 09:12:25.915169] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:32.179 09:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:32.437 09:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:32.437 09:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=66974 00:10:32.437 09:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:33.372 09:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 041cd3bb-087f-4da2-bb73-53e09147a7d3 MY_SNAPSHOT 00:10:33.937 09:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5d0957b2-36af-414f-8183-bed2abd0c5d7 00:10:33.937 09:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 041cd3bb-087f-4da2-bb73-53e09147a7d3 30 00:10:34.195 09:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 5d0957b2-36af-414f-8183-bed2abd0c5d7 MY_CLONE 00:10:34.453 09:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1cf49754-8500-4b94-9d6a-fb5d0c0f8982 00:10:34.453 09:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 1cf49754-8500-4b94-9d6a-fb5d0c0f8982 00:10:35.019 09:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 66974 00:10:43.129 Initializing NVMe Controllers 00:10:43.129 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:10:43.129 Controller IO queue size 128, less than required. 00:10:43.129 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:43.129 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:43.129 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:43.129 Initialization complete. Launching workers. 
00:10:43.129 ======================================================== 00:10:43.129 Latency(us) 00:10:43.129 Device Information : IOPS MiB/s Average min max 00:10:43.129 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8910.72 34.81 14380.22 241.73 169712.54 00:10:43.129 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8760.82 34.22 14623.13 3902.64 128167.20 00:10:43.129 ======================================================== 00:10:43.129 Total : 17671.53 69.03 14500.65 241.73 169712.54 00:10:43.129 00:10:43.129 09:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:43.129 09:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 041cd3bb-087f-4da2-bb73-53e09147a7d3 00:10:43.387 09:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d1d2d5d4-44f3-4b58-9fc4-df01c42e87d4 00:10:43.646 09:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:43.646 09:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:43.646 09:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:43.646 09:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:43.646 09:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:10:43.646 09:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:43.646 09:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:10:43.646 09:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:43.646 09:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:43.646 rmmod nvme_tcp 00:10:43.646 rmmod nvme_fabrics 00:10:43.646 rmmod nvme_keyring 00:10:43.646 09:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:43.646 09:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:10:43.646 09:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:10:43.646 09:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 66894 ']' 00:10:43.646 09:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 66894 00:10:43.646 09:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 66894 ']' 00:10:43.646 09:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 66894 00:10:43.646 09:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:10:43.646 09:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.646 09:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66894 00:10:43.646 killing process with pid 66894 00:10:43.646 09:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:43.646 09:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:43.646 09:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 66894' 00:10:43.646 09:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 66894 00:10:43.646 09:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 66894 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:10:45.022 00:10:45.022 real 0m17.277s 00:10:45.022 user 1m8.852s 00:10:45.022 sys 0m4.327s 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:45.022 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:45.022 ************************************ 00:10:45.022 END TEST nvmf_lvol 00:10:45.022 ************************************ 00:10:45.281 09:12:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:45.281 09:12:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:45.281 09:12:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.281 09:12:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:45.281 ************************************ 00:10:45.281 START TEST nvmf_lvs_grow 00:10:45.281 ************************************ 00:10:45.281 09:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:45.281 * Looking for test storage... 00:10:45.281 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:45.281 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:45.281 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:10:45.281 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:45.281 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:45.281 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:45.281 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:45.281 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:45.281 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:45.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.282 --rc genhtml_branch_coverage=1 00:10:45.282 --rc genhtml_function_coverage=1 00:10:45.282 --rc genhtml_legend=1 00:10:45.282 --rc geninfo_all_blocks=1 00:10:45.282 --rc geninfo_unexecuted_blocks=1 00:10:45.282 00:10:45.282 ' 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:45.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.282 --rc genhtml_branch_coverage=1 00:10:45.282 --rc genhtml_function_coverage=1 00:10:45.282 --rc genhtml_legend=1 00:10:45.282 --rc geninfo_all_blocks=1 00:10:45.282 --rc geninfo_unexecuted_blocks=1 00:10:45.282 00:10:45.282 ' 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:45.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.282 --rc genhtml_branch_coverage=1 00:10:45.282 --rc genhtml_function_coverage=1 00:10:45.282 --rc genhtml_legend=1 00:10:45.282 --rc geninfo_all_blocks=1 00:10:45.282 --rc geninfo_unexecuted_blocks=1 00:10:45.282 00:10:45.282 ' 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:45.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.282 --rc genhtml_branch_coverage=1 00:10:45.282 --rc genhtml_function_coverage=1 00:10:45.282 --rc genhtml_legend=1 00:10:45.282 --rc geninfo_all_blocks=1 00:10:45.282 --rc geninfo_unexecuted_blocks=1 00:10:45.282 00:10:45.282 ' 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:45.282 09:12:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:45.282 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
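nvmf_lvs_grow.sh begins by pointing rpc_py at the usual scripts/rpc.py and defining a second socket path, bdevperf_rpc_sock, for a bdevperf process used later in that test. rpc.py selects its server with -s, so the two endpoints would be addressed roughly as below; this is a sketch based on the variable names just defined, and bdev_get_bdevs is only an illustrative call, not taken from this trace:

    # RPCs to the NVMe-oF target (its default socket, /var/tmp/spdk.sock):
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs
    # RPCs to the bdevperf process, via the socket named above:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs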
00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:45.282 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:45.283 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:45.283 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:45.283 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:45.283 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.283 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.283 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.283 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:45.283 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:45.283 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:45.283 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:45.283 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:45.283 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:45.283 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:45.283 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:45.541 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:45.541 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:45.541 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:45.541 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:45.541 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:45.541 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:45.541 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:45.541 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:45.541 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:45.541 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:45.541 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:45.541 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:45.541 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:10:45.541 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:45.541 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:45.541 Cannot find device "nvmf_init_br" 00:10:45.541 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:10:45.541 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:45.541 Cannot find device "nvmf_init_br2" 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:45.542 Cannot find device "nvmf_tgt_br" 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:45.542 Cannot find device "nvmf_tgt_br2" 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:45.542 Cannot find device "nvmf_init_br" 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:45.542 Cannot find device "nvmf_init_br2" 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:45.542 Cannot find device "nvmf_tgt_br" 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:45.542 Cannot find device "nvmf_tgt_br2" 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:45.542 Cannot find device "nvmf_br" 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:45.542 Cannot find device "nvmf_init_if" 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:45.542 Cannot find device "nvmf_init_if2" 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:45.542 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:45.542 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:45.542 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
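The sequence above is the whole virtual topology that nvmf_veth_init builds when NET_TYPE=virt: two initiator veth pairs stay in the root namespace, two target veth pairs are moved into nvmf_tgt_ns_spdk, and the bridge-side peers are joined through nvmf_br. A condensed sketch of the same steps, assuming the interface names and 10.0.0.0/24 addresses shown in the trace (the loops are shorthand for the per-device commands above, not the literal nvmf/common.sh code):

    # Namespace for the NVMe-oF target; the initiator side stays in the root namespace
    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: the *_if ends carry IP addresses, the *_br ends get enslaved to the bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiators 10.0.0.1/.2 on the host, targets 10.0.0.3/.4 inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up, then tie the four *_br peers together with one bridge
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

With this in place the host addresses 10.0.0.1/10.0.0.2 and the namespaced target addresses 10.0.0.3/10.0.0.4 are mutually reachable across nvmf_br, which is exactly what the firewall rules and ping checks that follow verify before any NVMe/TCP traffic is attempted.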
00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:45.801 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:45.801 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:10:45.801 00:10:45.801 --- 10.0.0.3 ping statistics --- 00:10:45.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.801 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:45.801 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:45.801 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:10:45.801 00:10:45.801 --- 10.0.0.4 ping statistics --- 00:10:45.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.801 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:45.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:45.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:10:45.801 00:10:45.801 --- 10.0.0.1 ping statistics --- 00:10:45.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.801 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:45.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:45.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:10:45.801 00:10:45.801 --- 10.0.0.2 ping statistics --- 00:10:45.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.801 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=67370 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 67370 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 67370 ']' 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.801 09:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:46.060 [2024-12-13 09:12:39.713273] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:46.060 [2024-12-13 09:12:39.713478] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.060 [2024-12-13 09:12:39.897352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.319 [2024-12-13 09:12:39.988627] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.319 [2024-12-13 09:12:39.988756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.319 [2024-12-13 09:12:39.988789] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.319 [2024-12-13 09:12:39.988810] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.319 [2024-12-13 09:12:39.988823] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.319 [2024-12-13 09:12:39.990107] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.319 [2024-12-13 09:12:40.166924] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:46.887 09:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.887 09:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:10:46.887 09:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:46.887 09:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:46.887 09:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:46.887 09:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.887 09:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:47.146 [2024-12-13 09:12:40.942302] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:47.146 09:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:47.146 09:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:47.146 09:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.146 09:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:47.146 ************************************ 00:10:47.146 START TEST lvs_grow_clean 00:10:47.146 ************************************ 00:10:47.146 09:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:10:47.146 09:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:47.146 09:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:47.146 09:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:47.146 09:12:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:47.146 09:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:47.146 09:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:47.146 09:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:47.146 09:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:47.146 09:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:47.741 09:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:47.741 09:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:47.741 09:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=bc272db2-62b6-49ad-8743-4d73296ddc8a 00:10:47.741 09:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc272db2-62b6-49ad-8743-4d73296ddc8a 00:10:47.741 09:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:47.999 09:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:47.999 09:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:47.999 09:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bc272db2-62b6-49ad-8743-4d73296ddc8a lvol 150 00:10:48.567 09:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=25eadb2d-8993-4b19-96ae-dd043b45c07c 00:10:48.567 09:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:48.567 09:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:48.567 [2024-12-13 09:12:42.425731] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:48.567 [2024-12-13 09:12:42.425875] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:48.567 true 00:10:48.567 09:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc272db2-62b6-49ad-8743-4d73296ddc8a 00:10:48.567 09:12:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:49.135 09:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:49.135 09:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:49.135 09:12:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 25eadb2d-8993-4b19-96ae-dd043b45c07c 00:10:49.394 09:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:49.653 [2024-12-13 09:12:43.502601] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:49.653 09:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:49.912 09:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=67458 00:10:49.912 09:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:49.912 09:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 67458 /var/tmp/bdevperf.sock 00:10:49.912 09:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 67458 ']' 00:10:49.912 09:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:49.912 09:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:49.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:49.912 09:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:49.912 09:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:49.912 09:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:49.912 09:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:50.171 [2024-12-13 09:12:43.885173] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:50.171 [2024-12-13 09:12:43.885377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67458 ] 00:10:50.430 [2024-12-13 09:12:44.068821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.430 [2024-12-13 09:12:44.164595] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.688 [2024-12-13 09:12:44.330725] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:51.255 09:12:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.255 09:12:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:10:51.255 09:12:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:51.513 Nvme0n1 00:10:51.513 09:12:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:51.514 [ 00:10:51.514 { 00:10:51.514 "name": "Nvme0n1", 00:10:51.514 "aliases": [ 00:10:51.514 "25eadb2d-8993-4b19-96ae-dd043b45c07c" 00:10:51.514 ], 00:10:51.514 "product_name": "NVMe disk", 00:10:51.514 "block_size": 4096, 00:10:51.514 "num_blocks": 38912, 00:10:51.514 "uuid": "25eadb2d-8993-4b19-96ae-dd043b45c07c", 00:10:51.514 "numa_id": -1, 00:10:51.514 "assigned_rate_limits": { 00:10:51.514 "rw_ios_per_sec": 0, 00:10:51.514 "rw_mbytes_per_sec": 0, 00:10:51.514 "r_mbytes_per_sec": 0, 00:10:51.514 "w_mbytes_per_sec": 0 00:10:51.514 }, 00:10:51.514 "claimed": false, 00:10:51.514 "zoned": false, 00:10:51.514 "supported_io_types": { 00:10:51.514 "read": true, 00:10:51.514 "write": true, 00:10:51.514 "unmap": true, 00:10:51.514 "flush": true, 00:10:51.514 "reset": true, 00:10:51.514 "nvme_admin": true, 00:10:51.514 "nvme_io": true, 00:10:51.514 "nvme_io_md": false, 00:10:51.514 "write_zeroes": true, 00:10:51.514 "zcopy": false, 00:10:51.514 "get_zone_info": false, 00:10:51.514 "zone_management": false, 00:10:51.514 "zone_append": false, 00:10:51.514 "compare": true, 00:10:51.514 "compare_and_write": true, 00:10:51.514 "abort": true, 00:10:51.514 "seek_hole": false, 00:10:51.514 "seek_data": false, 00:10:51.514 "copy": true, 00:10:51.514 "nvme_iov_md": false 00:10:51.514 }, 00:10:51.514 "memory_domains": [ 00:10:51.514 { 00:10:51.514 "dma_device_id": "system", 00:10:51.514 "dma_device_type": 1 00:10:51.514 } 00:10:51.514 ], 00:10:51.514 "driver_specific": { 00:10:51.514 "nvme": [ 00:10:51.514 { 00:10:51.514 "trid": { 00:10:51.514 "trtype": "TCP", 00:10:51.514 "adrfam": "IPv4", 00:10:51.514 "traddr": "10.0.0.3", 00:10:51.514 "trsvcid": "4420", 00:10:51.514 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:51.514 }, 00:10:51.514 "ctrlr_data": { 00:10:51.514 "cntlid": 1, 00:10:51.514 "vendor_id": "0x8086", 00:10:51.514 "model_number": "SPDK bdev Controller", 00:10:51.514 "serial_number": "SPDK0", 00:10:51.514 "firmware_revision": "25.01", 00:10:51.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:51.514 "oacs": { 00:10:51.514 "security": 0, 00:10:51.514 "format": 0, 00:10:51.514 "firmware": 0, 
00:10:51.514 "ns_manage": 0 00:10:51.514 }, 00:10:51.514 "multi_ctrlr": true, 00:10:51.514 "ana_reporting": false 00:10:51.514 }, 00:10:51.514 "vs": { 00:10:51.514 "nvme_version": "1.3" 00:10:51.514 }, 00:10:51.514 "ns_data": { 00:10:51.514 "id": 1, 00:10:51.514 "can_share": true 00:10:51.514 } 00:10:51.514 } 00:10:51.514 ], 00:10:51.514 "mp_policy": "active_passive" 00:10:51.514 } 00:10:51.514 } 00:10:51.514 ] 00:10:51.772 09:12:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=67483 00:10:51.772 09:12:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:51.772 09:12:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:51.772 Running I/O for 10 seconds... 00:10:52.710 Latency(us) 00:10:52.710 [2024-12-13T09:12:46.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:52.710 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:52.710 Nvme0n1 : 1.00 5715.00 22.32 0.00 0.00 0.00 0.00 0.00 00:10:52.710 [2024-12-13T09:12:46.600Z] =================================================================================================================== 00:10:52.710 [2024-12-13T09:12:46.600Z] Total : 5715.00 22.32 0.00 0.00 0.00 0.00 0.00 00:10:52.710 00:10:53.647 09:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bc272db2-62b6-49ad-8743-4d73296ddc8a 00:10:53.647 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:53.647 Nvme0n1 : 2.00 5651.50 22.08 0.00 0.00 0.00 0.00 0.00 00:10:53.647 [2024-12-13T09:12:47.537Z] =================================================================================================================== 00:10:53.647 [2024-12-13T09:12:47.537Z] Total : 5651.50 22.08 0.00 0.00 0.00 0.00 0.00 00:10:53.647 00:10:53.906 true 00:10:53.906 09:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:53.906 09:12:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc272db2-62b6-49ad-8743-4d73296ddc8a 00:10:54.474 09:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:54.474 09:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:54.474 09:12:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 67483 00:10:54.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:54.732 Nvme0n1 : 3.00 5589.67 21.83 0.00 0.00 0.00 0.00 0.00 00:10:54.732 [2024-12-13T09:12:48.622Z] =================================================================================================================== 00:10:54.732 [2024-12-13T09:12:48.622Z] Total : 5589.67 21.83 0.00 0.00 0.00 0.00 0.00 00:10:54.732 00:10:55.669 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:55.669 Nvme0n1 : 4.00 5621.00 21.96 0.00 0.00 0.00 0.00 0.00 00:10:55.669 [2024-12-13T09:12:49.559Z] 
=================================================================================================================== 00:10:55.669 [2024-12-13T09:12:49.559Z] Total : 5621.00 21.96 0.00 0.00 0.00 0.00 0.00 00:10:55.669 00:10:57.045 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:57.045 Nvme0n1 : 5.00 5614.40 21.93 0.00 0.00 0.00 0.00 0.00 00:10:57.045 [2024-12-13T09:12:50.935Z] =================================================================================================================== 00:10:57.045 [2024-12-13T09:12:50.935Z] Total : 5614.40 21.93 0.00 0.00 0.00 0.00 0.00 00:10:57.045 00:10:57.982 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:57.982 Nvme0n1 : 6.00 5631.17 22.00 0.00 0.00 0.00 0.00 0.00 00:10:57.982 [2024-12-13T09:12:51.872Z] =================================================================================================================== 00:10:57.982 [2024-12-13T09:12:51.872Z] Total : 5631.17 22.00 0.00 0.00 0.00 0.00 0.00 00:10:57.982 00:10:58.916 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:58.916 Nvme0n1 : 7.00 5625.00 21.97 0.00 0.00 0.00 0.00 0.00 00:10:58.916 [2024-12-13T09:12:52.806Z] =================================================================================================================== 00:10:58.916 [2024-12-13T09:12:52.806Z] Total : 5625.00 21.97 0.00 0.00 0.00 0.00 0.00 00:10:58.916 00:10:59.851 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:59.851 Nvme0n1 : 8.00 5604.50 21.89 0.00 0.00 0.00 0.00 0.00 00:10:59.851 [2024-12-13T09:12:53.741Z] =================================================================================================================== 00:10:59.851 [2024-12-13T09:12:53.741Z] Total : 5604.50 21.89 0.00 0.00 0.00 0.00 0.00 00:10:59.851 00:11:00.787 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:00.787 Nvme0n1 : 9.00 5602.67 21.89 0.00 0.00 0.00 0.00 0.00 00:11:00.787 [2024-12-13T09:12:54.677Z] =================================================================================================================== 00:11:00.787 [2024-12-13T09:12:54.677Z] Total : 5602.67 21.89 0.00 0.00 0.00 0.00 0.00 00:11:00.787 00:11:01.734 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:01.734 Nvme0n1 : 10.00 5588.50 21.83 0.00 0.00 0.00 0.00 0.00 00:11:01.734 [2024-12-13T09:12:55.624Z] =================================================================================================================== 00:11:01.734 [2024-12-13T09:12:55.624Z] Total : 5588.50 21.83 0.00 0.00 0.00 0.00 0.00 00:11:01.734 00:11:01.734 00:11:01.734 Latency(us) 00:11:01.734 [2024-12-13T09:12:55.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:01.734 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:01.734 Nvme0n1 : 10.02 5589.75 21.83 0.00 0.00 22891.93 13822.14 59816.49 00:11:01.734 [2024-12-13T09:12:55.624Z] =================================================================================================================== 00:11:01.734 [2024-12-13T09:12:55.624Z] Total : 5589.75 21.83 0.00 0.00 22891.93 13822.14 59816.49 00:11:01.734 { 00:11:01.734 "results": [ 00:11:01.734 { 00:11:01.734 "job": "Nvme0n1", 00:11:01.734 "core_mask": "0x2", 00:11:01.734 "workload": "randwrite", 00:11:01.734 "status": "finished", 00:11:01.734 "queue_depth": 128, 00:11:01.734 "io_size": 4096, 00:11:01.734 "runtime": 
10.020665, 00:11:01.734 "iops": 5589.74878413758, 00:11:01.734 "mibps": 21.83495618803742, 00:11:01.734 "io_failed": 0, 00:11:01.734 "io_timeout": 0, 00:11:01.734 "avg_latency_us": 22891.932222227635, 00:11:01.734 "min_latency_us": 13822.138181818182, 00:11:01.734 "max_latency_us": 59816.494545454545 00:11:01.734 } 00:11:01.734 ], 00:11:01.734 "core_count": 1 00:11:01.734 } 00:11:01.734 09:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 67458 00:11:01.734 09:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 67458 ']' 00:11:01.734 09:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 67458 00:11:01.734 09:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:11:01.734 09:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.734 09:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67458 00:11:01.734 09:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:01.734 09:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:01.734 killing process with pid 67458 00:11:01.734 09:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67458' 00:11:01.734 09:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 67458 00:11:01.734 Received shutdown signal, test time was about 10.000000 seconds 00:11:01.734 00:11:01.734 Latency(us) 00:11:01.734 [2024-12-13T09:12:55.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:01.734 [2024-12-13T09:12:55.624Z] =================================================================================================================== 00:11:01.734 [2024-12-13T09:12:55.624Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:01.734 09:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 67458 00:11:02.704 09:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:02.963 09:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:03.222 09:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:03.222 09:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc272db2-62b6-49ad-8743-4d73296ddc8a 00:11:03.481 09:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:03.481 09:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:03.481 09:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:03.740 [2024-12-13 09:12:57.483261] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:03.740 09:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc272db2-62b6-49ad-8743-4d73296ddc8a 00:11:03.740 09:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:11:03.740 09:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc272db2-62b6-49ad-8743-4d73296ddc8a 00:11:03.740 09:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.740 09:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:03.740 09:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.740 09:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:03.740 09:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.740 09:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:03.740 09:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.740 09:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:03.740 09:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc272db2-62b6-49ad-8743-4d73296ddc8a 00:11:03.998 request: 00:11:03.998 { 00:11:03.998 "uuid": "bc272db2-62b6-49ad-8743-4d73296ddc8a", 00:11:03.998 "method": "bdev_lvol_get_lvstores", 00:11:03.998 "req_id": 1 00:11:03.998 } 00:11:03.998 Got JSON-RPC error response 00:11:03.998 response: 00:11:03.998 { 00:11:03.998 "code": -19, 00:11:03.998 "message": "No such device" 00:11:03.998 } 00:11:03.998 09:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:11:03.998 09:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:03.998 09:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:03.998 09:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:03.998 09:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:04.257 aio_bdev 00:11:04.257 09:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
25eadb2d-8993-4b19-96ae-dd043b45c07c 00:11:04.257 09:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=25eadb2d-8993-4b19-96ae-dd043b45c07c 00:11:04.257 09:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:04.257 09:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:11:04.257 09:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:04.257 09:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:04.257 09:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:04.516 09:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 25eadb2d-8993-4b19-96ae-dd043b45c07c -t 2000 00:11:04.774 [ 00:11:04.774 { 00:11:04.774 "name": "25eadb2d-8993-4b19-96ae-dd043b45c07c", 00:11:04.774 "aliases": [ 00:11:04.774 "lvs/lvol" 00:11:04.774 ], 00:11:04.774 "product_name": "Logical Volume", 00:11:04.774 "block_size": 4096, 00:11:04.774 "num_blocks": 38912, 00:11:04.774 "uuid": "25eadb2d-8993-4b19-96ae-dd043b45c07c", 00:11:04.774 "assigned_rate_limits": { 00:11:04.774 "rw_ios_per_sec": 0, 00:11:04.774 "rw_mbytes_per_sec": 0, 00:11:04.774 "r_mbytes_per_sec": 0, 00:11:04.774 "w_mbytes_per_sec": 0 00:11:04.774 }, 00:11:04.774 "claimed": false, 00:11:04.774 "zoned": false, 00:11:04.774 "supported_io_types": { 00:11:04.774 "read": true, 00:11:04.774 "write": true, 00:11:04.775 "unmap": true, 00:11:04.775 "flush": false, 00:11:04.775 "reset": true, 00:11:04.775 "nvme_admin": false, 00:11:04.775 "nvme_io": false, 00:11:04.775 "nvme_io_md": false, 00:11:04.775 "write_zeroes": true, 00:11:04.775 "zcopy": false, 00:11:04.775 "get_zone_info": false, 00:11:04.775 "zone_management": false, 00:11:04.775 "zone_append": false, 00:11:04.775 "compare": false, 00:11:04.775 "compare_and_write": false, 00:11:04.775 "abort": false, 00:11:04.775 "seek_hole": true, 00:11:04.775 "seek_data": true, 00:11:04.775 "copy": false, 00:11:04.775 "nvme_iov_md": false 00:11:04.775 }, 00:11:04.775 "driver_specific": { 00:11:04.775 "lvol": { 00:11:04.775 "lvol_store_uuid": "bc272db2-62b6-49ad-8743-4d73296ddc8a", 00:11:04.775 "base_bdev": "aio_bdev", 00:11:04.775 "thin_provision": false, 00:11:04.775 "num_allocated_clusters": 38, 00:11:04.775 "snapshot": false, 00:11:04.775 "clone": false, 00:11:04.775 "esnap_clone": false 00:11:04.775 } 00:11:04.775 } 00:11:04.775 } 00:11:04.775 ] 00:11:04.775 09:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:11:04.775 09:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:04.775 09:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc272db2-62b6-49ad-8743-4d73296ddc8a 00:11:05.033 09:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:05.033 09:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:11:05.033 09:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc272db2-62b6-49ad-8743-4d73296ddc8a 00:11:05.292 09:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:05.292 09:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 25eadb2d-8993-4b19-96ae-dd043b45c07c 00:11:05.550 09:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bc272db2-62b6-49ad-8743-4d73296ddc8a 00:11:05.808 09:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:06.067 09:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:06.635 00:11:06.635 real 0m19.250s 00:11:06.635 user 0m18.230s 00:11:06.635 sys 0m2.553s 00:11:06.635 09:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.635 09:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:06.635 ************************************ 00:11:06.635 END TEST lvs_grow_clean 00:11:06.635 ************************************ 00:11:06.635 09:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:06.635 09:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:06.635 09:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.635 09:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:06.635 ************************************ 00:11:06.635 START TEST lvs_grow_dirty 00:11:06.635 ************************************ 00:11:06.635 09:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:11:06.635 09:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:06.635 09:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:06.635 09:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:06.635 09:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:06.635 09:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:06.635 09:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:06.635 09:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:06.635 09:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:06.635 09:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:06.894 09:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:06.894 09:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:07.152 09:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=018b7c37-6598-4bfb-b430-1c198067099a 00:11:07.152 09:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 018b7c37-6598-4bfb-b430-1c198067099a 00:11:07.152 09:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:07.410 09:13:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:07.410 09:13:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:07.410 09:13:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 018b7c37-6598-4bfb-b430-1c198067099a lvol 150 00:11:07.668 09:13:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=986b8843-9a9d-4597-8780-8a824f43e795 00:11:07.669 09:13:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:07.669 09:13:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:07.927 [2024-12-13 09:13:01.589583] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:07.927 [2024-12-13 09:13:01.589707] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:07.927 true 00:11:07.927 09:13:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 018b7c37-6598-4bfb-b430-1c198067099a 00:11:07.927 09:13:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:08.185 09:13:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:08.185 09:13:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:08.444 09:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 986b8843-9a9d-4597-8780-8a824f43e795 00:11:08.703 09:13:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:11:08.704 [2024-12-13 09:13:02.562312] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:08.704 09:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:09.271 09:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=67740 00:11:09.271 09:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:09.271 09:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:09.271 09:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 67740 /var/tmp/bdevperf.sock 00:11:09.271 09:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 67740 ']' 00:11:09.271 09:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:09.271 09:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:09.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:09.271 09:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:09.271 09:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:09.271 09:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:09.271 [2024-12-13 09:13:02.964053] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:09.271 [2024-12-13 09:13:02.964216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67740 ] 00:11:09.271 [2024-12-13 09:13:03.136958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.529 [2024-12-13 09:13:03.259595] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.787 [2024-12-13 09:13:03.429049] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:10.045 09:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:10.045 09:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:11:10.045 09:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:10.303 Nvme0n1 00:11:10.562 09:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:10.820 [ 00:11:10.820 { 00:11:10.820 "name": "Nvme0n1", 00:11:10.820 "aliases": [ 00:11:10.820 "986b8843-9a9d-4597-8780-8a824f43e795" 00:11:10.820 ], 00:11:10.820 "product_name": "NVMe disk", 00:11:10.820 "block_size": 4096, 00:11:10.820 "num_blocks": 38912, 00:11:10.820 "uuid": "986b8843-9a9d-4597-8780-8a824f43e795", 00:11:10.820 "numa_id": -1, 00:11:10.820 "assigned_rate_limits": { 00:11:10.820 "rw_ios_per_sec": 0, 00:11:10.820 "rw_mbytes_per_sec": 0, 00:11:10.820 "r_mbytes_per_sec": 0, 00:11:10.820 "w_mbytes_per_sec": 0 00:11:10.820 }, 00:11:10.820 "claimed": false, 00:11:10.820 "zoned": false, 00:11:10.820 "supported_io_types": { 00:11:10.820 "read": true, 00:11:10.820 "write": true, 00:11:10.820 "unmap": true, 00:11:10.820 "flush": true, 00:11:10.820 "reset": true, 00:11:10.820 "nvme_admin": true, 00:11:10.820 "nvme_io": true, 00:11:10.820 "nvme_io_md": false, 00:11:10.820 "write_zeroes": true, 00:11:10.821 "zcopy": false, 00:11:10.821 "get_zone_info": false, 00:11:10.821 "zone_management": false, 00:11:10.821 "zone_append": false, 00:11:10.821 "compare": true, 00:11:10.821 "compare_and_write": true, 00:11:10.821 "abort": true, 00:11:10.821 "seek_hole": false, 00:11:10.821 "seek_data": false, 00:11:10.821 "copy": true, 00:11:10.821 "nvme_iov_md": false 00:11:10.821 }, 00:11:10.821 "memory_domains": [ 00:11:10.821 { 00:11:10.821 "dma_device_id": "system", 00:11:10.821 "dma_device_type": 1 00:11:10.821 } 00:11:10.821 ], 00:11:10.821 "driver_specific": { 00:11:10.821 "nvme": [ 00:11:10.821 { 00:11:10.821 "trid": { 00:11:10.821 "trtype": "TCP", 00:11:10.821 "adrfam": "IPv4", 00:11:10.821 "traddr": "10.0.0.3", 00:11:10.821 "trsvcid": "4420", 00:11:10.821 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:10.821 }, 00:11:10.821 "ctrlr_data": { 00:11:10.821 "cntlid": 1, 00:11:10.821 "vendor_id": "0x8086", 00:11:10.821 "model_number": "SPDK bdev Controller", 00:11:10.821 "serial_number": "SPDK0", 00:11:10.821 "firmware_revision": "25.01", 00:11:10.821 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:10.821 "oacs": { 00:11:10.821 "security": 0, 00:11:10.821 "format": 0, 00:11:10.821 "firmware": 0, 
00:11:10.821 "ns_manage": 0 00:11:10.821 }, 00:11:10.821 "multi_ctrlr": true, 00:11:10.821 "ana_reporting": false 00:11:10.821 }, 00:11:10.821 "vs": { 00:11:10.821 "nvme_version": "1.3" 00:11:10.821 }, 00:11:10.821 "ns_data": { 00:11:10.821 "id": 1, 00:11:10.821 "can_share": true 00:11:10.821 } 00:11:10.821 } 00:11:10.821 ], 00:11:10.821 "mp_policy": "active_passive" 00:11:10.821 } 00:11:10.821 } 00:11:10.821 ] 00:11:10.821 09:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=67769 00:11:10.821 09:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:10.821 09:13:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:10.821 Running I/O for 10 seconds... 00:11:12.194 Latency(us) 00:11:12.194 [2024-12-13T09:13:06.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:12.194 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:12.194 Nvme0n1 : 1.00 5461.00 21.33 0.00 0.00 0.00 0.00 0.00 00:11:12.194 [2024-12-13T09:13:06.084Z] =================================================================================================================== 00:11:12.194 [2024-12-13T09:13:06.084Z] Total : 5461.00 21.33 0.00 0.00 0.00 0.00 0.00 00:11:12.194 00:11:12.761 09:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 018b7c37-6598-4bfb-b430-1c198067099a 00:11:13.019 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:13.019 Nvme0n1 : 2.00 5461.00 21.33 0.00 0.00 0.00 0.00 0.00 00:11:13.019 [2024-12-13T09:13:06.909Z] =================================================================================================================== 00:11:13.019 [2024-12-13T09:13:06.909Z] Total : 5461.00 21.33 0.00 0.00 0.00 0.00 0.00 00:11:13.019 00:11:13.019 true 00:11:13.019 09:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:13.019 09:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 018b7c37-6598-4bfb-b430-1c198067099a 00:11:13.614 09:13:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:13.614 09:13:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:13.614 09:13:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 67769 00:11:13.872 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:13.872 Nvme0n1 : 3.00 5545.67 21.66 0.00 0.00 0.00 0.00 0.00 00:11:13.872 [2024-12-13T09:13:07.762Z] =================================================================================================================== 00:11:13.872 [2024-12-13T09:13:07.762Z] Total : 5545.67 21.66 0.00 0.00 0.00 0.00 0.00 00:11:13.872 00:11:14.808 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:14.809 Nvme0n1 : 4.00 5455.00 21.31 0.00 0.00 0.00 0.00 0.00 00:11:14.809 [2024-12-13T09:13:08.699Z] 
=================================================================================================================== 00:11:14.809 [2024-12-13T09:13:08.699Z] Total : 5455.00 21.31 0.00 0.00 0.00 0.00 0.00 00:11:14.809 00:11:16.184 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:16.184 Nvme0n1 : 5.00 5456.20 21.31 0.00 0.00 0.00 0.00 0.00 00:11:16.184 [2024-12-13T09:13:10.074Z] =================================================================================================================== 00:11:16.184 [2024-12-13T09:13:10.074Z] Total : 5456.20 21.31 0.00 0.00 0.00 0.00 0.00 00:11:16.184 00:11:17.119 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:17.119 Nvme0n1 : 6.00 5478.17 21.40 0.00 0.00 0.00 0.00 0.00 00:11:17.119 [2024-12-13T09:13:11.009Z] =================================================================================================================== 00:11:17.119 [2024-12-13T09:13:11.009Z] Total : 5478.17 21.40 0.00 0.00 0.00 0.00 0.00 00:11:17.119 00:11:18.055 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:18.055 Nvme0n1 : 7.00 5475.71 21.39 0.00 0.00 0.00 0.00 0.00 00:11:18.055 [2024-12-13T09:13:11.945Z] =================================================================================================================== 00:11:18.055 [2024-12-13T09:13:11.945Z] Total : 5475.71 21.39 0.00 0.00 0.00 0.00 0.00 00:11:18.055 00:11:18.990 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:18.990 Nvme0n1 : 8.00 5489.75 21.44 0.00 0.00 0.00 0.00 0.00 00:11:18.990 [2024-12-13T09:13:12.880Z] =================================================================================================================== 00:11:18.990 [2024-12-13T09:13:12.880Z] Total : 5489.75 21.44 0.00 0.00 0.00 0.00 0.00 00:11:18.990 00:11:19.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:19.925 Nvme0n1 : 9.00 5500.67 21.49 0.00 0.00 0.00 0.00 0.00 00:11:19.925 [2024-12-13T09:13:13.815Z] =================================================================================================================== 00:11:19.925 [2024-12-13T09:13:13.815Z] Total : 5500.67 21.49 0.00 0.00 0.00 0.00 0.00 00:11:19.925 00:11:20.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:20.859 Nvme0n1 : 10.00 5496.70 21.47 0.00 0.00 0.00 0.00 0.00 00:11:20.859 [2024-12-13T09:13:14.749Z] =================================================================================================================== 00:11:20.859 [2024-12-13T09:13:14.749Z] Total : 5496.70 21.47 0.00 0.00 0.00 0.00 0.00 00:11:20.859 00:11:20.859 00:11:20.859 Latency(us) 00:11:20.859 [2024-12-13T09:13:14.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:20.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:20.859 Nvme0n1 : 10.01 5503.29 21.50 0.00 0.00 23252.38 19779.96 85792.58 00:11:20.859 [2024-12-13T09:13:14.749Z] =================================================================================================================== 00:11:20.859 [2024-12-13T09:13:14.749Z] Total : 5503.29 21.50 0.00 0.00 23252.38 19779.96 85792.58 00:11:20.859 { 00:11:20.859 "results": [ 00:11:20.859 { 00:11:20.859 "job": "Nvme0n1", 00:11:20.859 "core_mask": "0x2", 00:11:20.859 "workload": "randwrite", 00:11:20.859 "status": "finished", 00:11:20.859 "queue_depth": 128, 00:11:20.859 "io_size": 4096, 00:11:20.859 "runtime": 
10.011284, 00:11:20.859 "iops": 5503.290087465304, 00:11:20.859 "mibps": 21.497226904161344, 00:11:20.859 "io_failed": 0, 00:11:20.859 "io_timeout": 0, 00:11:20.859 "avg_latency_us": 23252.377611827502, 00:11:20.859 "min_latency_us": 19779.956363636364, 00:11:20.859 "max_latency_us": 85792.58181818182 00:11:20.859 } 00:11:20.859 ], 00:11:20.859 "core_count": 1 00:11:20.859 } 00:11:20.859 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 67740 00:11:20.859 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 67740 ']' 00:11:20.859 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 67740 00:11:20.859 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:11:20.859 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:20.859 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67740 00:11:21.118 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:21.118 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:21.118 killing process with pid 67740 00:11:21.118 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67740' 00:11:21.118 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 67740 00:11:21.118 Received shutdown signal, test time was about 10.000000 seconds 00:11:21.118 00:11:21.118 Latency(us) 00:11:21.118 [2024-12-13T09:13:15.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:21.118 [2024-12-13T09:13:15.008Z] =================================================================================================================== 00:11:21.118 [2024-12-13T09:13:15.008Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:21.118 09:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 67740 00:11:22.052 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:22.053 09:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:22.619 09:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:22.619 09:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 018b7c37-6598-4bfb-b430-1c198067099a 00:11:22.619 09:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:22.619 09:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:22.619 09:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 67370 
00:11:22.619 09:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 67370 00:11:22.878 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 67370 Killed "${NVMF_APP[@]}" "$@" 00:11:22.878 09:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:22.878 09:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:22.879 09:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:22.879 09:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:22.879 09:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:22.879 09:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=67903 00:11:22.879 09:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:22.879 09:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 67903 00:11:22.879 09:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 67903 ']' 00:11:22.879 09:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.879 09:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.879 09:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.879 09:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.879 09:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:22.879 [2024-12-13 09:13:16.619946] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:11:22.879 [2024-12-13 09:13:16.620679] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.138 [2024-12-13 09:13:16.795273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.138 [2024-12-13 09:13:16.887234] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:23.138 [2024-12-13 09:13:16.887318] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:23.138 [2024-12-13 09:13:16.887354] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.138 [2024-12-13 09:13:16.887375] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.138 [2024-12-13 09:13:16.887388] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:23.138 [2024-12-13 09:13:16.888528] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.397 [2024-12-13 09:13:17.038458] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:23.964 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:23.964 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:11:23.964 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:23.964 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:23.964 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:23.964 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:23.964 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:24.223 [2024-12-13 09:13:17.919898] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:24.223 [2024-12-13 09:13:17.920340] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:24.223 [2024-12-13 09:13:17.920632] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:24.223 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:24.223 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 986b8843-9a9d-4597-8780-8a824f43e795 00:11:24.223 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=986b8843-9a9d-4597-8780-8a824f43e795 00:11:24.223 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:24.223 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:11:24.223 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.223 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.223 09:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:24.482 09:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 986b8843-9a9d-4597-8780-8a824f43e795 -t 2000 00:11:24.740 [ 00:11:24.740 { 00:11:24.740 "name": "986b8843-9a9d-4597-8780-8a824f43e795", 00:11:24.740 "aliases": [ 00:11:24.740 "lvs/lvol" 00:11:24.740 ], 00:11:24.740 "product_name": "Logical Volume", 00:11:24.740 "block_size": 4096, 00:11:24.740 "num_blocks": 38912, 00:11:24.740 "uuid": "986b8843-9a9d-4597-8780-8a824f43e795", 00:11:24.740 "assigned_rate_limits": { 00:11:24.740 "rw_ios_per_sec": 0, 00:11:24.740 "rw_mbytes_per_sec": 0, 00:11:24.740 "r_mbytes_per_sec": 0, 00:11:24.740 "w_mbytes_per_sec": 0 00:11:24.740 }, 00:11:24.740 
"claimed": false, 00:11:24.740 "zoned": false, 00:11:24.740 "supported_io_types": { 00:11:24.740 "read": true, 00:11:24.740 "write": true, 00:11:24.740 "unmap": true, 00:11:24.740 "flush": false, 00:11:24.740 "reset": true, 00:11:24.740 "nvme_admin": false, 00:11:24.740 "nvme_io": false, 00:11:24.740 "nvme_io_md": false, 00:11:24.740 "write_zeroes": true, 00:11:24.740 "zcopy": false, 00:11:24.740 "get_zone_info": false, 00:11:24.740 "zone_management": false, 00:11:24.740 "zone_append": false, 00:11:24.740 "compare": false, 00:11:24.740 "compare_and_write": false, 00:11:24.740 "abort": false, 00:11:24.740 "seek_hole": true, 00:11:24.740 "seek_data": true, 00:11:24.740 "copy": false, 00:11:24.740 "nvme_iov_md": false 00:11:24.740 }, 00:11:24.740 "driver_specific": { 00:11:24.740 "lvol": { 00:11:24.740 "lvol_store_uuid": "018b7c37-6598-4bfb-b430-1c198067099a", 00:11:24.740 "base_bdev": "aio_bdev", 00:11:24.740 "thin_provision": false, 00:11:24.740 "num_allocated_clusters": 38, 00:11:24.740 "snapshot": false, 00:11:24.740 "clone": false, 00:11:24.740 "esnap_clone": false 00:11:24.740 } 00:11:24.740 } 00:11:24.740 } 00:11:24.740 ] 00:11:24.740 09:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:11:24.740 09:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 018b7c37-6598-4bfb-b430-1c198067099a 00:11:24.740 09:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:24.999 09:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:24.999 09:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 018b7c37-6598-4bfb-b430-1c198067099a 00:11:24.999 09:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:25.257 09:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:25.257 09:13:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:25.516 [2024-12-13 09:13:19.165417] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:25.516 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 018b7c37-6598-4bfb-b430-1c198067099a 00:11:25.516 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:11:25.516 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 018b7c37-6598-4bfb-b430-1c198067099a 00:11:25.516 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:25.516 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:25.516 09:13:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:25.516 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:25.516 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:25.516 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:25.516 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:25.516 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:25.516 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 018b7c37-6598-4bfb-b430-1c198067099a 00:11:25.775 request: 00:11:25.775 { 00:11:25.775 "uuid": "018b7c37-6598-4bfb-b430-1c198067099a", 00:11:25.775 "method": "bdev_lvol_get_lvstores", 00:11:25.775 "req_id": 1 00:11:25.775 } 00:11:25.775 Got JSON-RPC error response 00:11:25.775 response: 00:11:25.775 { 00:11:25.775 "code": -19, 00:11:25.775 "message": "No such device" 00:11:25.775 } 00:11:25.775 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:11:25.775 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:25.775 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:25.775 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:25.775 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:26.036 aio_bdev 00:11:26.036 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 986b8843-9a9d-4597-8780-8a824f43e795 00:11:26.036 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=986b8843-9a9d-4597-8780-8a824f43e795 00:11:26.036 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.036 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:11:26.036 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.036 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.036 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:26.294 09:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 986b8843-9a9d-4597-8780-8a824f43e795 -t 2000 00:11:26.294 [ 00:11:26.294 { 
00:11:26.294 "name": "986b8843-9a9d-4597-8780-8a824f43e795", 00:11:26.294 "aliases": [ 00:11:26.294 "lvs/lvol" 00:11:26.294 ], 00:11:26.294 "product_name": "Logical Volume", 00:11:26.294 "block_size": 4096, 00:11:26.294 "num_blocks": 38912, 00:11:26.294 "uuid": "986b8843-9a9d-4597-8780-8a824f43e795", 00:11:26.294 "assigned_rate_limits": { 00:11:26.294 "rw_ios_per_sec": 0, 00:11:26.294 "rw_mbytes_per_sec": 0, 00:11:26.294 "r_mbytes_per_sec": 0, 00:11:26.294 "w_mbytes_per_sec": 0 00:11:26.294 }, 00:11:26.294 "claimed": false, 00:11:26.294 "zoned": false, 00:11:26.294 "supported_io_types": { 00:11:26.294 "read": true, 00:11:26.294 "write": true, 00:11:26.294 "unmap": true, 00:11:26.294 "flush": false, 00:11:26.294 "reset": true, 00:11:26.294 "nvme_admin": false, 00:11:26.294 "nvme_io": false, 00:11:26.294 "nvme_io_md": false, 00:11:26.294 "write_zeroes": true, 00:11:26.294 "zcopy": false, 00:11:26.294 "get_zone_info": false, 00:11:26.294 "zone_management": false, 00:11:26.294 "zone_append": false, 00:11:26.294 "compare": false, 00:11:26.294 "compare_and_write": false, 00:11:26.294 "abort": false, 00:11:26.294 "seek_hole": true, 00:11:26.294 "seek_data": true, 00:11:26.294 "copy": false, 00:11:26.294 "nvme_iov_md": false 00:11:26.294 }, 00:11:26.294 "driver_specific": { 00:11:26.294 "lvol": { 00:11:26.294 "lvol_store_uuid": "018b7c37-6598-4bfb-b430-1c198067099a", 00:11:26.294 "base_bdev": "aio_bdev", 00:11:26.294 "thin_provision": false, 00:11:26.294 "num_allocated_clusters": 38, 00:11:26.294 "snapshot": false, 00:11:26.294 "clone": false, 00:11:26.294 "esnap_clone": false 00:11:26.294 } 00:11:26.294 } 00:11:26.294 } 00:11:26.294 ] 00:11:26.553 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:11:26.553 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 018b7c37-6598-4bfb-b430-1c198067099a 00:11:26.553 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:26.812 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:26.812 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 018b7c37-6598-4bfb-b430-1c198067099a 00:11:26.812 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:27.071 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:27.071 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 986b8843-9a9d-4597-8780-8a824f43e795 00:11:27.329 09:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 018b7c37-6598-4bfb-b430-1c198067099a 00:11:27.588 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:27.588 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:28.154 00:11:28.154 real 0m21.517s 00:11:28.154 user 0m45.504s 00:11:28.154 sys 0m8.936s 00:11:28.154 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.154 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:28.154 ************************************ 00:11:28.154 END TEST lvs_grow_dirty 00:11:28.154 ************************************ 00:11:28.154 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:28.154 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:11:28.154 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:11:28.154 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:11:28.155 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:28.155 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:11:28.155 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:11:28.155 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:11:28.155 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:28.155 nvmf_trace.0 00:11:28.155 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:11:28.155 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:28.155 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:28.155 09:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:11:28.413 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:28.413 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:11:28.413 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:28.413 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:28.413 rmmod nvme_tcp 00:11:28.413 rmmod nvme_fabrics 00:11:28.413 rmmod nvme_keyring 00:11:28.413 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:28.413 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:11:28.413 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:11:28.413 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 67903 ']' 00:11:28.413 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 67903 00:11:28.413 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 67903 ']' 00:11:28.413 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 67903 00:11:28.413 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:11:28.413 09:13:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:28.413 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67903 00:11:28.413 killing process with pid 67903 00:11:28.413 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:28.413 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:28.413 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67903' 00:11:28.413 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 67903 00:11:28.413 09:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 67903 00:11:29.349 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:29.349 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:29.349 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:29.349 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:11:29.349 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:11:29.349 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:29.349 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:11:29.349 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:29.349 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:29.349 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:29.349 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:29.349 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:29.349 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:29.349 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:29.349 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:29.349 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:29.349 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:29.349 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:29.349 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:29.349 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:29.349 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:29.349 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:29.349 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:11:29.349 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.349 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.349 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:11:29.608 00:11:29.608 real 0m44.299s 00:11:29.608 user 1m10.902s 00:11:29.608 sys 0m12.371s 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:29.608 ************************************ 00:11:29.608 END TEST nvmf_lvs_grow 00:11:29.608 ************************************ 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:29.608 ************************************ 00:11:29.608 START TEST nvmf_bdev_io_wait 00:11:29.608 ************************************ 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:29.608 * Looking for test storage... 
00:11:29.608 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:29.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.608 --rc genhtml_branch_coverage=1 00:11:29.608 --rc genhtml_function_coverage=1 00:11:29.608 --rc genhtml_legend=1 00:11:29.608 --rc geninfo_all_blocks=1 00:11:29.608 --rc geninfo_unexecuted_blocks=1 00:11:29.608 00:11:29.608 ' 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:29.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.608 --rc genhtml_branch_coverage=1 00:11:29.608 --rc genhtml_function_coverage=1 00:11:29.608 --rc genhtml_legend=1 00:11:29.608 --rc geninfo_all_blocks=1 00:11:29.608 --rc geninfo_unexecuted_blocks=1 00:11:29.608 00:11:29.608 ' 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:29.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.608 --rc genhtml_branch_coverage=1 00:11:29.608 --rc genhtml_function_coverage=1 00:11:29.608 --rc genhtml_legend=1 00:11:29.608 --rc geninfo_all_blocks=1 00:11:29.608 --rc geninfo_unexecuted_blocks=1 00:11:29.608 00:11:29.608 ' 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:29.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.608 --rc genhtml_branch_coverage=1 00:11:29.608 --rc genhtml_function_coverage=1 00:11:29.608 --rc genhtml_legend=1 00:11:29.608 --rc geninfo_all_blocks=1 00:11:29.608 --rc geninfo_unexecuted_blocks=1 00:11:29.608 00:11:29.608 ' 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:29.608 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.868 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:29.869 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:29.869 
09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:29.869 Cannot find device "nvmf_init_br" 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:29.869 Cannot find device "nvmf_init_br2" 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:29.869 Cannot find device "nvmf_tgt_br" 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:29.869 Cannot find device "nvmf_tgt_br2" 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:29.869 Cannot find device "nvmf_init_br" 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:29.869 Cannot find device "nvmf_init_br2" 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:29.869 Cannot find device "nvmf_tgt_br" 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:29.869 Cannot find device "nvmf_tgt_br2" 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:29.869 Cannot find device "nvmf_br" 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:29.869 Cannot find device "nvmf_init_if" 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:29.869 Cannot find device "nvmf_init_if2" 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:29.869 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:11:29.869 
09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:29.869 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:29.869 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:30.129 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:30.129 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:11:30.129 00:11:30.129 --- 10.0.0.3 ping statistics --- 00:11:30.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.129 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:30.129 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:30.129 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:11:30.129 00:11:30.129 --- 10.0.0.4 ping statistics --- 00:11:30.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.129 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:30.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:30.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:11:30.129 00:11:30.129 --- 10.0.0.1 ping statistics --- 00:11:30.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.129 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:30.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:30.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:11:30.129 00:11:30.129 --- 10.0.0.2 ping statistics --- 00:11:30.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.129 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:30.129 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:30.130 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:30.130 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:30.130 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=68282 00:11:30.130 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 68282 00:11:30.130 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:30.130 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 68282 ']' 00:11:30.130 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.130 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:30.130 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.130 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:30.130 09:13:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:30.389 [2024-12-13 09:13:24.029343] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
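The four pings above verify the veth topology that nvmf_veth_init builds for the TCP tests: nvmf_init_if (10.0.0.1) and nvmf_init_if2 (10.0.0.2) stay on the host as the initiator side, nvmf_tgt_if (10.0.0.3) and nvmf_tgt_if2 (10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace for the target, and all peer ends are enslaved to the nvmf_br bridge, with iptables ACCEPT rules for port 4420 tagged "SPDK_NVMF" so cleanup can strip them later. A condensed sketch of that bring-up, using only commands and addresses taken from the trace (ordering is simplified):

ip netns add nvmf_tgt_ns_spdk
# Two veth pairs for the initiator side, two for the target side.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
# Target-side ends live inside the namespace; addresses match the pings above.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# One bridge joins both sides; NVMe/TCP traffic on 4420 is explicitly allowed and tagged.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'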
00:11:30.389 [2024-12-13 09:13:24.029519] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.389 [2024-12-13 09:13:24.211216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:30.648 [2024-12-13 09:13:24.308135] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.648 [2024-12-13 09:13:24.308212] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:30.648 [2024-12-13 09:13:24.308231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.648 [2024-12-13 09:13:24.308243] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.648 [2024-12-13 09:13:24.308256] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:30.648 [2024-12-13 09:13:24.310461] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.648 [2024-12-13 09:13:24.310557] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.648 [2024-12-13 09:13:24.310752] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.648 [2024-12-13 09:13:24.310760] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:31.215 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:31.215 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:11:31.215 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:31.215 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:31.215 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:31.215 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.215 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:31.215 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.215 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:31.215 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.215 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:31.215 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.215 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:31.476 [2024-12-13 09:13:25.253506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:31.476 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.476 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:31.476 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.476 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:31.476 [2024-12-13 09:13:25.269916] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:31.476 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.476 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:31.476 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.476 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:31.476 Malloc0 00:11:31.476 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.476 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:31.476 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.476 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:31.476 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.476 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:31.476 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.734 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:31.735 [2024-12-13 09:13:25.376699] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=68328 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=68330 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:31.735 09:13:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:31.735 { 00:11:31.735 "params": { 00:11:31.735 "name": "Nvme$subsystem", 00:11:31.735 "trtype": "$TEST_TRANSPORT", 00:11:31.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:31.735 "adrfam": "ipv4", 00:11:31.735 "trsvcid": "$NVMF_PORT", 00:11:31.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:31.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:31.735 "hdgst": ${hdgst:-false}, 00:11:31.735 "ddgst": ${ddgst:-false} 00:11:31.735 }, 00:11:31.735 "method": "bdev_nvme_attach_controller" 00:11:31.735 } 00:11:31.735 EOF 00:11:31.735 )") 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=68332 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:31.735 { 00:11:31.735 "params": { 00:11:31.735 "name": "Nvme$subsystem", 00:11:31.735 "trtype": "$TEST_TRANSPORT", 00:11:31.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:31.735 "adrfam": "ipv4", 00:11:31.735 "trsvcid": "$NVMF_PORT", 00:11:31.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:31.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:31.735 "hdgst": ${hdgst:-false}, 00:11:31.735 "ddgst": ${ddgst:-false} 00:11:31.735 }, 00:11:31.735 "method": "bdev_nvme_attach_controller" 00:11:31.735 } 00:11:31.735 EOF 00:11:31.735 )") 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=68334 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:11:31.735 { 00:11:31.735 "params": { 00:11:31.735 "name": "Nvme$subsystem", 00:11:31.735 "trtype": "$TEST_TRANSPORT", 00:11:31.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:31.735 "adrfam": "ipv4", 00:11:31.735 "trsvcid": "$NVMF_PORT", 00:11:31.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:31.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:31.735 "hdgst": ${hdgst:-false}, 00:11:31.735 "ddgst": ${ddgst:-false} 00:11:31.735 }, 00:11:31.735 "method": "bdev_nvme_attach_controller" 00:11:31.735 } 00:11:31.735 EOF 00:11:31.735 )") 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:31.735 { 00:11:31.735 "params": { 00:11:31.735 "name": "Nvme$subsystem", 00:11:31.735 "trtype": "$TEST_TRANSPORT", 00:11:31.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:31.735 "adrfam": "ipv4", 00:11:31.735 "trsvcid": "$NVMF_PORT", 00:11:31.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:31.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:31.735 "hdgst": ${hdgst:-false}, 00:11:31.735 "ddgst": ${ddgst:-false} 00:11:31.735 }, 00:11:31.735 "method": "bdev_nvme_attach_controller" 00:11:31.735 } 00:11:31.735 EOF 00:11:31.735 )") 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:31.735 "params": { 00:11:31.735 "name": "Nvme1", 00:11:31.735 "trtype": "tcp", 00:11:31.735 "traddr": "10.0.0.3", 00:11:31.735 "adrfam": "ipv4", 00:11:31.735 "trsvcid": "4420", 00:11:31.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:31.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:31.735 "hdgst": false, 00:11:31.735 "ddgst": false 00:11:31.735 }, 00:11:31.735 "method": "bdev_nvme_attach_controller" 00:11:31.735 }' 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:31.735 "params": { 00:11:31.735 "name": "Nvme1", 00:11:31.735 "trtype": "tcp", 00:11:31.735 "traddr": "10.0.0.3", 00:11:31.735 "adrfam": "ipv4", 00:11:31.735 "trsvcid": "4420", 00:11:31.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:31.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:31.735 "hdgst": false, 00:11:31.735 "ddgst": false 00:11:31.735 }, 00:11:31.735 "method": "bdev_nvme_attach_controller" 00:11:31.735 }' 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:31.735 "params": { 00:11:31.735 "name": "Nvme1", 00:11:31.735 "trtype": "tcp", 00:11:31.735 "traddr": "10.0.0.3", 00:11:31.735 "adrfam": "ipv4", 00:11:31.735 "trsvcid": "4420", 00:11:31.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:31.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:31.735 "hdgst": false, 00:11:31.735 "ddgst": false 00:11:31.735 }, 00:11:31.735 "method": "bdev_nvme_attach_controller" 00:11:31.735 }' 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:31.735 "params": { 00:11:31.735 "name": "Nvme1", 00:11:31.735 "trtype": "tcp", 00:11:31.735 "traddr": "10.0.0.3", 00:11:31.735 "adrfam": "ipv4", 00:11:31.735 "trsvcid": "4420", 00:11:31.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:31.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:31.735 "hdgst": false, 00:11:31.735 "ddgst": false 00:11:31.735 }, 00:11:31.735 "method": "bdev_nvme_attach_controller" 00:11:31.735 }' 00:11:31.735 09:13:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 68328 00:11:31.735 [2024-12-13 09:13:25.475426] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:11:31.735 [2024-12-13 09:13:25.475569] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:31.735 [2024-12-13 09:13:25.496857] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:11:31.735 [2024-12-13 09:13:25.497015] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:31.736 [2024-12-13 09:13:25.498265] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:11:31.736 [2024-12-13 09:13:25.498422] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:31.736 [2024-12-13 09:13:25.512624] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:31.736 [2024-12-13 09:13:25.512793] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:31.994 [2024-12-13 09:13:25.691249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.994 [2024-12-13 09:13:25.733738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.994 [2024-12-13 09:13:25.778857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.994 [2024-12-13 09:13:25.809331] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:11:31.994 [2024-12-13 09:13:25.826841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.994 [2024-12-13 09:13:25.851792] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:11:32.252 [2024-12-13 09:13:25.896374] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:11:32.252 [2024-12-13 09:13:25.943345] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:11:32.252 [2024-12-13 09:13:25.999943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:32.252 [2024-12-13 09:13:26.030001] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:32.252 [2024-12-13 09:13:26.075703] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:32.252 [2024-12-13 09:13:26.109596] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:32.510 Running I/O for 1 seconds... 00:11:32.511 Running I/O for 1 seconds... 00:11:32.511 Running I/O for 1 seconds... 00:11:32.511 Running I/O for 1 seconds... 
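Above, target/bdev_io_wait.sh launches four bdevperf instances in parallel, one per workload (write, read, flush, unmap), each pinned to its own core mask and instance id, and each handed the output of gen_nvmf_target_json through a /dev/fd/63 process substitution so it attaches to nqn.2016-06.io.spdk:cnode1 at 10.0.0.3:4420. A minimal sketch of that launch pattern with the values copied from the trace; the run_one helper name is an assumption, not part of the script:

run_one() {  # args: core mask, instance id, workload
    # gen_nvmf_target_json (nvmf/common.sh) prints the bdev_nvme_attach_controller
    # JSON shown above; <() hands it to bdevperf as --json /dev/fd/63.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m "$1" -i "$2" --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w "$3" -t 1 -s 256 &
}
run_one 0x10 1 write;  WRITE_PID=$!
run_one 0x20 2 read;   READ_PID=$!
run_one 0x40 3 flush;  FLUSH_PID=$!
run_one 0x80 4 unmap;  UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"   # the script waits on 68328, 68330, 68332, 68334 in turn

The disjoint core masks (0x10 through 0x80) keep the four workloads on separate reactors, which is why four distinct "Reactor started on core N" notices appear before the runs begin.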
00:11:33.446 8697.00 IOPS, 33.97 MiB/s 00:11:33.446 Latency(us) 00:11:33.446 [2024-12-13T09:13:27.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:33.446 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:33.446 Nvme1n1 : 1.01 8753.29 34.19 0.00 0.00 14552.38 4557.73 21209.83 00:11:33.446 [2024-12-13T09:13:27.336Z] =================================================================================================================== 00:11:33.446 [2024-12-13T09:13:27.336Z] Total : 8753.29 34.19 0.00 0.00 14552.38 4557.73 21209.83 00:11:33.446 6041.00 IOPS, 23.60 MiB/s 00:11:33.446 Latency(us) 00:11:33.446 [2024-12-13T09:13:27.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:33.446 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:33.446 Nvme1n1 : 1.02 6080.85 23.75 0.00 0.00 20885.27 8996.31 28955.00 00:11:33.446 [2024-12-13T09:13:27.336Z] =================================================================================================================== 00:11:33.446 [2024-12-13T09:13:27.336Z] Total : 6080.85 23.75 0.00 0.00 20885.27 8996.31 28955.00 00:11:33.446 7118.00 IOPS, 27.80 MiB/s 00:11:33.446 Latency(us) 00:11:33.446 [2024-12-13T09:13:27.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:33.446 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:33.446 Nvme1n1 : 1.01 7192.48 28.10 0.00 0.00 17701.54 7864.32 25737.77 00:11:33.446 [2024-12-13T09:13:27.336Z] =================================================================================================================== 00:11:33.446 [2024-12-13T09:13:27.336Z] Total : 7192.48 28.10 0.00 0.00 17701.54 7864.32 25737.77 00:11:33.446 136856.00 IOPS, 534.59 MiB/s 00:11:33.446 Latency(us) 00:11:33.446 [2024-12-13T09:13:27.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:33.446 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:33.446 Nvme1n1 : 1.00 136552.80 533.41 0.00 0.00 932.52 446.84 2204.39 00:11:33.446 [2024-12-13T09:13:27.336Z] =================================================================================================================== 00:11:33.446 [2024-12-13T09:13:27.336Z] Total : 136552.80 533.41 0.00 0.00 932.52 446.84 2204.39 00:11:34.014 09:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 68330 00:11:34.014 09:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 68332 00:11:34.273 09:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 68334 00:11:34.273 09:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.273 09:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.273 09:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:34.273 09:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.273 09:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:34.273 09:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:34.273 09:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:11:34.273 09:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:11:34.273 09:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:34.273 09:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:11:34.273 09:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:34.273 09:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:34.273 rmmod nvme_tcp 00:11:34.273 rmmod nvme_fabrics 00:11:34.273 rmmod nvme_keyring 00:11:34.273 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:34.273 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:11:34.273 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:11:34.273 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 68282 ']' 00:11:34.273 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 68282 00:11:34.273 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 68282 ']' 00:11:34.273 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 68282 00:11:34.273 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:11:34.273 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:34.273 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68282 00:11:34.274 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:34.274 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:34.274 killing process with pid 68282 00:11:34.274 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68282' 00:11:34.274 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 68282 00:11:34.274 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 68282 00:11:35.209 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:35.209 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:35.209 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:35.209 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:11:35.209 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:11:35.209 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:35.209 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:11:35.209 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:35.209 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:35.209 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:35.209 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:35.209 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:35.209 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:35.209 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:35.209 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:35.209 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:35.209 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:35.209 09:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:35.209 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:35.209 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:35.209 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:35.209 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:11:35.468 00:11:35.468 real 0m5.826s 00:11:35.468 user 0m24.961s 00:11:35.468 sys 0m2.622s 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:35.468 ************************************ 00:11:35.468 END TEST nvmf_bdev_io_wait 00:11:35.468 ************************************ 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:35.468 ************************************ 00:11:35.468 START TEST nvmf_queue_depth 00:11:35.468 ************************************ 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:35.468 * Looking for test storage... 
00:11:35.468 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:11:35.468 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.469 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.469 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.469 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:11:35.469 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.469 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:35.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.469 --rc genhtml_branch_coverage=1 00:11:35.469 --rc genhtml_function_coverage=1 00:11:35.469 --rc genhtml_legend=1 00:11:35.469 --rc geninfo_all_blocks=1 00:11:35.469 --rc geninfo_unexecuted_blocks=1 00:11:35.469 00:11:35.469 ' 00:11:35.469 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:35.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.469 --rc genhtml_branch_coverage=1 00:11:35.469 --rc genhtml_function_coverage=1 00:11:35.469 --rc genhtml_legend=1 00:11:35.469 --rc geninfo_all_blocks=1 00:11:35.469 --rc geninfo_unexecuted_blocks=1 00:11:35.469 00:11:35.469 ' 00:11:35.469 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:35.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.469 --rc genhtml_branch_coverage=1 00:11:35.469 --rc genhtml_function_coverage=1 00:11:35.469 --rc genhtml_legend=1 00:11:35.469 --rc geninfo_all_blocks=1 00:11:35.469 --rc geninfo_unexecuted_blocks=1 00:11:35.469 00:11:35.469 ' 00:11:35.469 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:35.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.469 --rc genhtml_branch_coverage=1 00:11:35.469 --rc genhtml_function_coverage=1 00:11:35.469 --rc genhtml_legend=1 00:11:35.469 --rc geninfo_all_blocks=1 00:11:35.469 --rc geninfo_unexecuted_blocks=1 00:11:35.469 00:11:35.469 ' 00:11:35.469 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:35.469 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:11:35.469 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.469 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.469 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.469 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.469 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.469 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.469 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.469 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.469 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.469 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:35.728 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:35.728 
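MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 presumably feed the same provisioning sequence the bdev_io_wait test issued above through rpc_cmd: create a 64 MiB malloc bdev with 512-byte blocks, expose it as a namespace of nqn.2016-06.io.spdk:cnode1, and listen on 10.0.0.3:4420. A sketch of that sequence issued directly with scripts/rpc.py; the RPC names and arguments are the ones visible in the earlier trace, while driving them through rpc.py rather than the rpc_cmd wrapper is an assumption:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192       # "-t tcp -o" comes from NVMF_TRANSPORT_OPTS
$rpc bdev_malloc_create 64 512 -b Malloc0          # MALLOC_BDEV_SIZE MiB, MALLOC_BLOCK_SIZE-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420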
09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:35.728 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:35.729 09:13:29 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:35.729 Cannot find device "nvmf_init_br" 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:35.729 Cannot find device "nvmf_init_br2" 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:35.729 Cannot find device "nvmf_tgt_br" 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:35.729 Cannot find device "nvmf_tgt_br2" 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:35.729 Cannot find device "nvmf_init_br" 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:35.729 Cannot find device "nvmf_init_br2" 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:35.729 Cannot find device "nvmf_tgt_br" 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:35.729 Cannot find device "nvmf_tgt_br2" 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:35.729 Cannot find device "nvmf_br" 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:35.729 Cannot find device "nvmf_init_if" 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:35.729 Cannot find device "nvmf_init_if2" 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:35.729 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:35.729 09:13:29 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:35.729 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:35.729 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:35.988 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:35.988 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:35.988 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:35.988 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:35.988 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:35.988 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:35.988 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:35.988 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:35.988 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:35.988 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:35.988 
09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:35.988 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:35.988 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:35.988 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:35.988 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:35.988 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:35.988 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:35.988 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:35.988 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:35.988 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:35.988 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:35.988 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:11:35.988 00:11:35.988 --- 10.0.0.3 ping statistics --- 00:11:35.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.988 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:11:35.988 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:35.988 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:35.988 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:11:35.988 00:11:35.988 --- 10.0.0.4 ping statistics --- 00:11:35.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.988 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:11:35.989 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:35.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:35.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:11:35.989 00:11:35.989 --- 10.0.0.1 ping statistics --- 00:11:35.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.989 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:11:35.989 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:35.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:35.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:11:35.989 00:11:35.989 --- 10.0.0.2 ping statistics --- 00:11:35.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.989 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:35.989 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.989 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:11:35.989 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:35.989 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.989 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:35.989 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:35.989 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.989 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:35.989 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:35.989 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:35.989 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:35.989 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:35.989 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:35.989 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=68632 00:11:35.989 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:35.989 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 68632 00:11:35.989 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 68632 ']' 00:11:35.989 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.989 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.989 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.989 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.989 09:13:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:36.247 [2024-12-13 09:13:29.905607] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:36.248 [2024-12-13 09:13:29.905783] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.248 [2024-12-13 09:13:30.099344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.506 [2024-12-13 09:13:30.225111] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:36.506 [2024-12-13 09:13:30.225191] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:36.507 [2024-12-13 09:13:30.225215] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:36.507 [2024-12-13 09:13:30.225243] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:36.507 [2024-12-13 09:13:30.225260] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:36.507 [2024-12-13 09:13:30.226736] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.507 [2024-12-13 09:13:30.389562] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:37.074 09:13:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:37.074 09:13:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:11:37.074 09:13:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:37.074 09:13:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:37.074 09:13:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:37.074 09:13:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.074 09:13:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:37.074 09:13:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.074 09:13:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:37.074 [2024-12-13 09:13:30.950733] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.074 09:13:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.074 09:13:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:37.074 09:13:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.074 09:13:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:37.333 Malloc0 00:11:37.333 09:13:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.333 09:13:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:37.333 09:13:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.333 09:13:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:11:37.333 09:13:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.333 09:13:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:37.333 09:13:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.333 09:13:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:37.333 09:13:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.333 09:13:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:37.333 09:13:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.333 09:13:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:37.333 [2024-12-13 09:13:31.057735] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:37.333 09:13:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.333 09:13:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=68664 00:11:37.333 09:13:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:37.333 09:13:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:37.333 09:13:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 68664 /var/tmp/bdevperf.sock 00:11:37.333 09:13:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 68664 ']' 00:11:37.333 09:13:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:37.333 09:13:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:37.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:37.333 09:13:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:37.333 09:13:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:37.333 09:13:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:37.333 [2024-12-13 09:13:31.174503] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:37.333 [2024-12-13 09:13:31.174689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68664 ] 00:11:37.640 [2024-12-13 09:13:31.362192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.640 [2024-12-13 09:13:31.487913] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.899 [2024-12-13 09:13:31.666722] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:38.466 09:13:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:38.466 09:13:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:11:38.466 09:13:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:38.466 09:13:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.466 09:13:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:38.466 NVMe0n1 00:11:38.466 09:13:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.466 09:13:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:38.724 Running I/O for 10 seconds... 00:11:40.593 6062.00 IOPS, 23.68 MiB/s [2024-12-13T09:13:35.415Z] 6153.00 IOPS, 24.04 MiB/s [2024-12-13T09:13:36.789Z] 6398.00 IOPS, 24.99 MiB/s [2024-12-13T09:13:37.724Z] 6408.25 IOPS, 25.03 MiB/s [2024-12-13T09:13:38.659Z] 6552.40 IOPS, 25.60 MiB/s [2024-12-13T09:13:39.594Z] 6602.00 IOPS, 25.79 MiB/s [2024-12-13T09:13:40.529Z] 6603.14 IOPS, 25.79 MiB/s [2024-12-13T09:13:41.467Z] 6660.25 IOPS, 26.02 MiB/s [2024-12-13T09:13:42.404Z] 6724.22 IOPS, 26.27 MiB/s [2024-12-13T09:13:42.664Z] 6813.90 IOPS, 26.62 MiB/s 00:11:48.774 Latency(us) 00:11:48.774 [2024-12-13T09:13:42.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:48.774 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:48.774 Verification LBA range: start 0x0 length 0x4000 00:11:48.774 NVMe0n1 : 10.09 6845.13 26.74 0.00 0.00 148706.54 23592.96 106287.48 00:11:48.774 [2024-12-13T09:13:42.664Z] =================================================================================================================== 00:11:48.774 [2024-12-13T09:13:42.664Z] Total : 6845.13 26.74 0.00 0.00 148706.54 23592.96 106287.48 00:11:48.774 { 00:11:48.774 "results": [ 00:11:48.774 { 00:11:48.774 "job": "NVMe0n1", 00:11:48.774 "core_mask": "0x1", 00:11:48.774 "workload": "verify", 00:11:48.774 "status": "finished", 00:11:48.774 "verify_range": { 00:11:48.774 "start": 0, 00:11:48.774 "length": 16384 00:11:48.774 }, 00:11:48.774 "queue_depth": 1024, 00:11:48.774 "io_size": 4096, 00:11:48.774 "runtime": 10.094622, 00:11:48.774 "iops": 6845.1300108116975, 00:11:48.774 "mibps": 26.738789104733193, 00:11:48.774 "io_failed": 0, 00:11:48.774 "io_timeout": 0, 00:11:48.774 "avg_latency_us": 148706.54430331185, 00:11:48.774 "min_latency_us": 23592.96, 00:11:48.774 "max_latency_us": 106287.47636363636 00:11:48.774 } 
00:11:48.774 ], 00:11:48.774 "core_count": 1 00:11:48.774 } 00:11:48.774 09:13:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 68664 00:11:48.774 09:13:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 68664 ']' 00:11:48.774 09:13:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 68664 00:11:48.774 09:13:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:48.774 09:13:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:48.774 09:13:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68664 00:11:48.774 09:13:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:48.774 09:13:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:48.774 killing process with pid 68664 00:11:48.774 09:13:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68664' 00:11:48.774 09:13:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 68664 00:11:48.774 Received shutdown signal, test time was about 10.000000 seconds 00:11:48.774 00:11:48.774 Latency(us) 00:11:48.774 [2024-12-13T09:13:42.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:48.774 [2024-12-13T09:13:42.664Z] =================================================================================================================== 00:11:48.774 [2024-12-13T09:13:42.664Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:48.774 09:13:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 68664 00:11:49.341 09:13:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:49.341 09:13:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:49.341 09:13:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:49.341 09:13:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:49.600 09:13:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:49.600 09:13:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:49.600 09:13:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:49.600 09:13:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:49.600 rmmod nvme_tcp 00:11:49.600 rmmod nvme_fabrics 00:11:49.600 rmmod nvme_keyring 00:11:49.600 09:13:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:49.600 09:13:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:49.600 09:13:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:49.600 09:13:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 68632 ']' 00:11:49.600 09:13:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 68632 00:11:49.600 09:13:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 68632 ']' 00:11:49.600 
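The bdevperf summary a few lines up is self-consistent: the run drove 4096-byte I/O (-o 4096) at queue depth 1024 (-q 1024) for roughly 10 seconds, so MiB/s is just IOPS times the I/O size, and the total I/O count follows from the runtime. A quick sanity check of the reported numbers (the values are copied from the JSON above; the awk one-liner is editorial):

# rough sanity check of the bdevperf summary
iops=6845.13 io_size=4096 runtime=10.094622
awk -v iops="$iops" -v sz="$io_size" -v t="$runtime" 'BEGIN {
    printf "throughput: %.2f MiB/s\n", iops * sz / (1024 * 1024)   # ~26.74, matches "mibps"
    printf "total I/Os: %.0f\n",       iops * t                    # ~69099 over the 10 s run
}'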
09:13:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 68632 00:11:49.600 09:13:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:49.600 09:13:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.600 09:13:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68632 00:11:49.600 09:13:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:49.600 09:13:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:49.600 killing process with pid 68632 00:11:49.600 09:13:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68632' 00:11:49.600 09:13:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 68632 00:11:49.600 09:13:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 68632 00:11:50.539 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:50.539 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:50.539 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:50.539 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:50.539 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:11:50.539 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:50.539 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:11:50.539 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:50.539 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:50.539 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:50.539 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:50.539 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:50.798 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:50.798 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:50.798 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:50.798 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:50.798 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:50.798 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:50.798 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:50.798 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:50.798 09:13:44 
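The iptr step above is the payoff for tagging every rule inserted during setup with '-m comment --comment SPDK_NVMF:...': teardown never deletes individual rules, it dumps the whole ruleset, drops anything carrying the tag, and loads the result back. The idiom, written out (commands taken from the log):

# insert a rule tagged so cleanup can find it again later
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

# cleanup: dump the ruleset, filter out every tagged rule, restore the remainder
iptables-save | grep -v SPDK_NVMF | iptables-restore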
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:50.798 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:50.798 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:50.798 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.798 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.798 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.798 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:11:50.798 00:11:50.798 real 0m15.441s 00:11:50.798 user 0m26.001s 00:11:50.798 sys 0m2.251s 00:11:50.798 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.798 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:50.798 ************************************ 00:11:50.798 END TEST nvmf_queue_depth 00:11:50.798 ************************************ 00:11:50.798 09:13:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:50.798 09:13:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:50.798 09:13:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.798 09:13:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:50.798 ************************************ 00:11:50.798 START TEST nvmf_target_multipath 00:11:50.798 ************************************ 00:11:50.798 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:51.059 * Looking for test storage... 
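Every test in this log is launched through the same run_test wrapper: it prints the START banner, times the script (the real/user/sys block above), and only prints the END banner when the script exits cleanly. A simplified sketch of that pattern is below; it is an approximation for readability, not the actual autotest_common.sh implementation, and the function body here is editorial:

# simplified run_test-style wrapper (approximation, not the real harness code)
run_test() {
    local name=$1; shift
    echo "START TEST $name"
    local start=$SECONDS
    "$@"                 # e.g. .../test/nvmf/target/multipath.sh --transport=tcp
    local rc=$?
    echo "elapsed: $((SECONDS - start))s"
    echo "END TEST $name (rc=$rc)"
    return $rc
}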
00:11:51.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:51.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.059 --rc genhtml_branch_coverage=1 00:11:51.059 --rc genhtml_function_coverage=1 00:11:51.059 --rc genhtml_legend=1 00:11:51.059 --rc geninfo_all_blocks=1 00:11:51.059 --rc geninfo_unexecuted_blocks=1 00:11:51.059 00:11:51.059 ' 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:51.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.059 --rc genhtml_branch_coverage=1 00:11:51.059 --rc genhtml_function_coverage=1 00:11:51.059 --rc genhtml_legend=1 00:11:51.059 --rc geninfo_all_blocks=1 00:11:51.059 --rc geninfo_unexecuted_blocks=1 00:11:51.059 00:11:51.059 ' 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:51.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.059 --rc genhtml_branch_coverage=1 00:11:51.059 --rc genhtml_function_coverage=1 00:11:51.059 --rc genhtml_legend=1 00:11:51.059 --rc geninfo_all_blocks=1 00:11:51.059 --rc geninfo_unexecuted_blocks=1 00:11:51.059 00:11:51.059 ' 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:51.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.059 --rc genhtml_branch_coverage=1 00:11:51.059 --rc genhtml_function_coverage=1 00:11:51.059 --rc genhtml_legend=1 00:11:51.059 --rc geninfo_all_blocks=1 00:11:51.059 --rc geninfo_unexecuted_blocks=1 00:11:51.059 00:11:51.059 ' 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:51.059 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.060 
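The host identity used by every 'nvme connect' later in this test is generated here: nvme gen-hostnqn emits a UUID-based NQN, and the UUID portion doubles as the host ID, both of which are passed back on the connect command line. Roughly (the parameter expansion used to derive the host ID is illustrative, not necessarily the exact expression in common.sh):

# generate a UUID-based host NQN and reuse its UUID as the host ID
NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # strip everything up to "uuid:"
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
     -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420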
09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:51.060 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:51.060 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:51.060 09:13:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:51.061 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.061 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:51.061 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:51.061 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:51.061 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:51.061 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:51.061 Cannot find device "nvmf_init_br" 00:11:51.061 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:11:51.061 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:51.061 Cannot find device "nvmf_init_br2" 00:11:51.061 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:11:51.061 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:51.061 Cannot find device "nvmf_tgt_br" 00:11:51.061 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:11:51.061 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:51.061 Cannot find device "nvmf_tgt_br2" 00:11:51.061 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:11:51.061 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:51.061 Cannot find device "nvmf_init_br" 00:11:51.061 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:11:51.061 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:51.061 Cannot find device "nvmf_init_br2" 00:11:51.061 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:11:51.061 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:51.326 Cannot find device "nvmf_tgt_br" 00:11:51.326 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:11:51.326 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:51.326 Cannot find device "nvmf_tgt_br2" 00:11:51.326 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:11:51.326 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:51.326 Cannot find device "nvmf_br" 00:11:51.326 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:11:51.326 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:51.326 Cannot find device "nvmf_init_if" 00:11:51.326 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:11:51.326 09:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:51.326 Cannot find device "nvmf_init_if2" 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:51.326 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:51.326 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
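All of the "Cannot find device" and "Cannot open network namespace" lines above are expected: nvmf_veth_init starts by tearing down whatever a previous run may have left behind, and each delete is immediately forced to succeed so a missing interface never aborts the script. The same idempotent-cleanup idiom, written out (device names from the log):

# best-effort cleanup: missing devices or namespaces must not abort the test
ip link delete nvmf_br type bridge                          || true
ip link delete nvmf_init_if                                 || true
ip link delete nvmf_init_if2                                || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if   || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2  || true
# only after this does the harness rebuild the topology from scratch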
00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:51.326 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:51.586 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:51.586 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:51.586 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:51.586 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:51.586 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:51.586 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:51.586 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:51.586 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:51.586 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:11:51.586 00:11:51.586 --- 10.0.0.3 ping statistics --- 00:11:51.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.586 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:11:51.586 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:51.586 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:51.586 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:11:51.586 00:11:51.586 --- 10.0.0.4 ping statistics --- 00:11:51.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.586 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:11:51.586 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:51.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:51.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:11:51.586 00:11:51.586 --- 10.0.0.1 ping statistics --- 00:11:51.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.586 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:11:51.586 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:51.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:51.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:11:51.586 00:11:51.586 --- 10.0.0.2 ping statistics --- 00:11:51.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.586 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:11:51.586 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:51.586 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:11:51.586 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:51.586 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:51.586 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:51.587 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:51.587 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:51.587 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:51.587 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:51.587 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:11:51.587 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:11:51.587 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:11:51.587 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:51.587 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:51.587 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:51.587 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=69057 00:11:51.587 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:51.587 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 69057 00:11:51.587 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 69057 ']' 00:11:51.587 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.587 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:51.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
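waitforlisten does what the message says: the nvmf_tgt just started in the namespace runs in the background, and the helper polls its RPC Unix socket until the application answers, giving up after a bounded number of retries. A minimal poll loop in the same spirit is shown below; it is a sketch, not the autotest_common.sh helper, and it assumes the nvmfpid variable set a few lines above:

# poll the SPDK RPC socket until the freshly started nvmf_tgt answers (sketch)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
pid=$nvmfpid                                    # pid of the nvmf_tgt launched above
for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
    if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
        break                                   # RPC server is up and listening
    fi
    sleep 0.5
done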
00:11:51.587 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.587 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:51.587 09:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:51.587 [2024-12-13 09:13:45.410580] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:11:51.587 [2024-12-13 09:13:45.410776] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.845 [2024-12-13 09:13:45.600456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:51.845 [2024-12-13 09:13:45.731179] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.845 [2024-12-13 09:13:45.731244] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.845 [2024-12-13 09:13:45.731266] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.845 [2024-12-13 09:13:45.731303] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.845 [2024-12-13 09:13:45.731331] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:51.845 [2024-12-13 09:13:45.733586] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.104 [2024-12-13 09:13:45.733847] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:52.104 [2024-12-13 09:13:45.733949] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.104 [2024-12-13 09:13:45.734035] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.104 [2024-12-13 09:13:45.933863] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:52.672 09:13:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.672 09:13:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:11:52.672 09:13:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:52.672 09:13:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:52.672 09:13:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:52.672 09:13:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.672 09:13:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:52.931 [2024-12-13 09:13:46.734360] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.931 09:13:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:11:53.499 Malloc0 00:11:53.499 09:13:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:11:53.757 09:13:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:54.016 09:13:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:54.274 [2024-12-13 09:13:47.931291] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:54.274 09:13:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:11:54.532 [2024-12-13 09:13:48.263558] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:11:54.532 09:13:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid=5267ba90-6d03-4c73-b69a-15b62f92a67a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:11:54.532 09:13:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid=5267ba90-6d03-4c73-b69a-15b62f92a67a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:11:54.791 09:13:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.791 09:13:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:11:54.791 09:13:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.791 09:13:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:54.791 09:13:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:11:56.691 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:56.691 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:56.691 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.691 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:56.691 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.691 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:11:56.691 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:11:56.691 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:11:56.691 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:11:56.691 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:56.691 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:11:56.691 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:11:56.691 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:11:56.691 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:11:56.691 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:11:56.691 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:11:56.691 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:11:56.691 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:11:56.949 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:11:56.949 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:11:56.949 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:56.949 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:56.949 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:56.949 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:56.949 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:56.949 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:11:56.949 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:56.949 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:56.949 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:56.949 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:56.949 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:56.949 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:11:56.949 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=69153 00:11:56.949 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:56.949 09:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:11:56.949 [global] 00:11:56.949 thread=1 00:11:56.949 invalidate=1 00:11:56.949 rw=randrw 00:11:56.949 time_based=1 00:11:56.949 runtime=6 00:11:56.949 ioengine=libaio 00:11:56.949 direct=1 00:11:56.949 bs=4096 00:11:56.949 iodepth=128 00:11:56.949 norandommap=0 00:11:56.949 numjobs=1 00:11:56.949 00:11:56.949 verify_dump=1 00:11:56.949 verify_backlog=512 00:11:56.949 verify_state_save=0 00:11:56.949 do_verify=1 00:11:56.949 verify=crc32c-intel 00:11:56.950 [job0] 00:11:56.950 filename=/dev/nvme0n1 00:11:56.950 Could not set queue depth (nvme0n1) 00:11:56.950 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:56.950 fio-3.35 00:11:56.950 Starting 1 thread 00:11:57.884 09:13:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:58.142 09:13:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:11:58.400 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:11:58.400 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:58.400 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:58.400 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:58.400 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:58.400 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:58.400 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:11:58.400 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:58.400 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:58.400 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:58.400 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:58.400 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:58.400 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:58.658 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:58.916 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:11:58.916 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:58.916 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:58.916 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:58.916 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:58.916 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:58.916 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:11:58.916 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:58.916 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:58.916 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:58.916 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:58.916 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:58.916 09:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 69153 00:12:03.099 00:12:03.099 job0: (groupid=0, jobs=1): err= 0: pid=69182: Fri Dec 13 09:13:56 2024 00:12:03.099 read: IOPS=8513, BW=33.3MiB/s (34.9MB/s)(200MiB/6003msec) 00:12:03.099 slat (usec): min=3, max=7275, avg=71.38, stdev=272.41 00:12:03.099 clat (usec): min=1455, max=18541, avg=10310.46, stdev=1663.90 00:12:03.099 lat (usec): min=1835, max=18551, avg=10381.84, stdev=1667.50 00:12:03.099 clat percentiles (usec): 00:12:03.099 | 1.00th=[ 5407], 5.00th=[ 8225], 10.00th=[ 8979], 20.00th=[ 9503], 00:12:03.099 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:12:03.099 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11600], 95.00th=[14353], 00:12:03.099 | 99.00th=[15926], 99.50th=[16188], 99.90th=[16909], 99.95th=[17171], 00:12:03.099 | 99.99th=[17433] 00:12:03.099 bw ( KiB/s): min= 2696, max=21960, per=54.76%, avg=18647.27, stdev=5445.77, samples=11 00:12:03.099 iops : min= 674, max= 5490, avg=4661.82, stdev=1361.44, samples=11 00:12:03.099 write: IOPS=5076, BW=19.8MiB/s (20.8MB/s)(101MiB/5094msec); 0 zone resets 00:12:03.099 slat (usec): min=16, max=3192, avg=79.54, stdev=207.29 00:12:03.099 clat (usec): min=2705, max=17574, avg=9077.50, stdev=1547.70 00:12:03.099 lat (usec): min=2732, max=17623, avg=9157.04, stdev=1552.42 00:12:03.099 clat percentiles (usec): 00:12:03.099 | 1.00th=[ 4015], 5.00th=[ 5407], 10.00th=[ 7570], 20.00th=[ 8455], 00:12:03.099 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9503], 00:12:03.099 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10290], 95.00th=[10683], 00:12:03.099 | 99.00th=[13829], 99.50th=[14615], 99.90th=[16188], 99.95th=[16712], 00:12:03.099 | 99.99th=[17433] 00:12:03.099 bw ( KiB/s): min= 2848, max=22456, per=91.84%, avg=18651.64, stdev=5444.33, samples=11 00:12:03.099 iops : min= 712, max= 5614, avg=4662.91, stdev=1361.08, samples=11 00:12:03.099 lat (msec) : 2=0.01%, 4=0.42%, 10=53.92%, 20=45.65% 00:12:03.099 cpu : usr=4.88%, sys=19.02%, ctx=4490, majf=0, minf=114 00:12:03.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:12:03.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:03.099 issued rwts: total=51107,25862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:03.099 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:03.099 00:12:03.099 Run status group 0 (all jobs): 00:12:03.099 READ: bw=33.3MiB/s (34.9MB/s), 33.3MiB/s-33.3MiB/s (34.9MB/s-34.9MB/s), io=200MiB (209MB), run=6003-6003msec 00:12:03.099 WRITE: bw=19.8MiB/s (20.8MB/s), 19.8MiB/s-19.8MiB/s (20.8MB/s-20.8MB/s), io=101MiB (106MB), run=5094-5094msec 00:12:03.099 00:12:03.099 Disk stats (read/write): 00:12:03.099 nvme0n1: ios=49930/25862, merge=0/0, ticks=496808/221658, in_queue=718466, util=98.48% 00:12:03.099 09:13:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:12:03.357 09:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:12:03.922 09:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:12:03.922 09:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:12:03.922 09:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:03.922 09:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:03.922 09:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:03.922 09:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:03.922 09:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:12:03.922 09:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:12:03.922 09:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:03.922 09:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:03.922 09:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:03.922 09:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:03.922 09:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:12:03.922 09:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=69261 00:12:03.922 09:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:12:03.922 09:13:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:12:03.922 [global] 00:12:03.922 thread=1 00:12:03.922 invalidate=1 00:12:03.922 rw=randrw 00:12:03.922 time_based=1 00:12:03.922 runtime=6 00:12:03.922 ioengine=libaio 00:12:03.922 direct=1 00:12:03.922 bs=4096 00:12:03.922 iodepth=128 00:12:03.922 norandommap=0 00:12:03.922 numjobs=1 00:12:03.922 00:12:03.922 verify_dump=1 00:12:03.922 verify_backlog=512 00:12:03.922 verify_state_save=0 00:12:03.922 do_verify=1 00:12:03.922 verify=crc32c-intel 00:12:03.922 [job0] 00:12:03.922 filename=/dev/nvme0n1 00:12:03.922 Could not set queue depth (nvme0n1) 00:12:03.922 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:03.922 fio-3.35 00:12:03.922 Starting 1 thread 00:12:04.853 09:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:12:05.110 09:13:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:12:05.367 
09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:12:05.367 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:12:05.367 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:05.367 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:05.367 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:05.367 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:05.368 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:12:05.368 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:12:05.368 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:05.368 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:05.368 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:05.368 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:05.368 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:12:05.624 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:12:05.881 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:12:05.881 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:12:05.881 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:05.881 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:05.881 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:12:05.881 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:05.881 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:12:05.881 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:12:05.881 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:05.881 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:05.881 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:05.881 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:05.881 09:13:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 69261 00:12:10.063 00:12:10.063 job0: (groupid=0, jobs=1): err= 0: pid=69282: Fri Dec 13 09:14:03 2024 00:12:10.063 read: IOPS=9654, BW=37.7MiB/s (39.5MB/s)(227MiB/6008msec) 00:12:10.063 slat (usec): min=4, max=7741, avg=53.53, stdev=241.16 00:12:10.063 clat (usec): min=349, max=17293, avg=9248.89, stdev=2560.11 00:12:10.063 lat (usec): min=361, max=17307, avg=9302.42, stdev=2580.00 00:12:10.063 clat percentiles (usec): 00:12:10.063 | 1.00th=[ 3032], 5.00th=[ 4621], 10.00th=[ 5538], 20.00th=[ 6915], 00:12:10.063 | 30.00th=[ 8455], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10159], 00:12:10.063 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11469], 95.00th=[13960], 00:12:10.063 | 99.00th=[15795], 99.50th=[16057], 99.90th=[16712], 99.95th=[16909], 00:12:10.063 | 99.99th=[17171] 00:12:10.063 bw ( KiB/s): min= 1296, max=30576, per=50.11%, avg=19353.25, stdev=7434.44, samples=12 00:12:10.063 iops : min= 324, max= 7644, avg=4838.25, stdev=1858.64, samples=12 00:12:10.063 write: IOPS=5435, BW=21.2MiB/s (22.3MB/s)(114MiB/5366msec); 0 zone resets 00:12:10.063 slat (usec): min=15, max=1994, avg=62.15, stdev=173.74 00:12:10.063 clat (usec): min=1520, max=17400, avg=7762.05, stdev=2389.68 00:12:10.063 lat (usec): min=1548, max=18228, avg=7824.20, stdev=2411.66 00:12:10.063 clat percentiles (usec): 00:12:10.063 | 1.00th=[ 2999], 5.00th=[ 3785], 10.00th=[ 4293], 20.00th=[ 5145], 00:12:10.063 | 30.00th=[ 5932], 40.00th=[ 7570], 50.00th=[ 8717], 60.00th=[ 9110], 00:12:10.063 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10159], 95.00th=[10421], 00:12:10.063 | 99.00th=[13304], 99.50th=[14353], 99.90th=[15401], 99.95th=[15926], 00:12:10.063 | 99.99th=[17171] 00:12:10.063 bw ( KiB/s): min= 1384, max=31632, per=89.22%, avg=19399.92, stdev=7453.05, samples=12 00:12:10.063 iops : min= 346, max= 7908, avg=4849.92, stdev=1863.29, samples=12 00:12:10.063 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:12:10.063 lat (msec) : 2=0.18%, 4=4.18%, 10=60.60%, 20=35.00% 00:12:10.063 cpu : usr=5.49%, sys=19.83%, ctx=4898, majf=0, minf=102 00:12:10.063 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:10.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:10.063 issued rwts: total=58006,29168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.063 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:12:10.063 00:12:10.063 Run status group 0 (all jobs): 00:12:10.063 READ: bw=37.7MiB/s (39.5MB/s), 37.7MiB/s-37.7MiB/s (39.5MB/s-39.5MB/s), io=227MiB (238MB), run=6008-6008msec 00:12:10.063 WRITE: bw=21.2MiB/s (22.3MB/s), 21.2MiB/s-21.2MiB/s (22.3MB/s-22.3MB/s), io=114MiB (119MB), run=5366-5366msec 00:12:10.063 00:12:10.063 Disk stats (read/write): 00:12:10.063 nvme0n1: ios=57271/28716, merge=0/0, ticks=508665/210300, in_queue=718965, util=98.66% 00:12:10.063 09:14:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:10.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:10.321 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:10.321 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:12:10.321 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:10.321 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.321 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:10.321 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.321 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:12:10.321 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.578 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:12:10.578 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:12:10.578 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:12:10.578 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:12:10.578 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:10.578 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:12:10.578 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:10.578 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:12:10.578 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:10.578 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:10.578 rmmod nvme_tcp 00:12:10.578 rmmod nvme_fabrics 00:12:10.578 rmmod nvme_keyring 00:12:10.579 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:10.579 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:12:10.579 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:12:10.579 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 
69057 ']' 00:12:10.579 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 69057 00:12:10.579 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 69057 ']' 00:12:10.579 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 69057 00:12:10.579 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:12:10.579 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:10.579 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69057 00:12:10.579 killing process with pid 69057 00:12:10.579 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:10.579 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:10.579 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69057' 00:12:10.579 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 69057 00:12:10.579 09:14:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 69057 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:11.951 09:14:05 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:12:11.951 00:12:11.951 real 0m21.133s 00:12:11.951 user 1m17.952s 00:12:11.951 sys 0m9.378s 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.951 ************************************ 00:12:11.951 END TEST nvmf_target_multipath 00:12:11.951 ************************************ 00:12:11.951 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:12.211 09:14:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:12.211 09:14:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:12.211 09:14:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.211 09:14:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:12.211 ************************************ 00:12:12.211 START TEST nvmf_zcopy 00:12:12.211 ************************************ 00:12:12.211 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:12.211 * Looking for test storage... 
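For orientation, the multipath exercise traced above amounts to flipping the ANA state of each listener over RPC and confirming the kernel host sees the change while fio keeps running. A condensed sketch, assuming the cnode1 subsystem, the 10.0.0.3/10.0.0.4 listeners and the two host connections from earlier in the trace are already in place, and abbreviating the full scripts/rpc.py path to rpc.py (in this run controller path c0 corresponds to 10.0.0.3 and c1 to 10.0.0.4):

    # fail path 1 and leave path 2 as the usable (non-optimized) one
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized
    # the host-side ANA state is exposed per controller path in sysfs
    cat /sys/block/nvme0c0n1/ana_state   # expected: inaccessible
    cat /sys/block/nvme0c1n1/ana_state   # expected: non-optimized

As a quick sanity check on the two fio summaries above: 51107 reads of 4096 B in 6003 ms works out to roughly 33.3 MiB/s, and 58006 reads of 4096 B in 6008 ms to roughly 37.7 MiB/s, matching the reported READ bandwidths of the two runs.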
00:12:12.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:12.211 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:12.211 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:12.211 09:14:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:12.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.211 --rc genhtml_branch_coverage=1 00:12:12.211 --rc genhtml_function_coverage=1 00:12:12.211 --rc genhtml_legend=1 00:12:12.211 --rc geninfo_all_blocks=1 00:12:12.211 --rc geninfo_unexecuted_blocks=1 00:12:12.211 00:12:12.211 ' 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:12.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.211 --rc genhtml_branch_coverage=1 00:12:12.211 --rc genhtml_function_coverage=1 00:12:12.211 --rc genhtml_legend=1 00:12:12.211 --rc geninfo_all_blocks=1 00:12:12.211 --rc geninfo_unexecuted_blocks=1 00:12:12.211 00:12:12.211 ' 00:12:12.211 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:12.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.211 --rc genhtml_branch_coverage=1 00:12:12.211 --rc genhtml_function_coverage=1 00:12:12.211 --rc genhtml_legend=1 00:12:12.211 --rc geninfo_all_blocks=1 00:12:12.211 --rc geninfo_unexecuted_blocks=1 00:12:12.211 00:12:12.211 ' 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:12.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.212 --rc genhtml_branch_coverage=1 00:12:12.212 --rc genhtml_function_coverage=1 00:12:12.212 --rc genhtml_legend=1 00:12:12.212 --rc geninfo_all_blocks=1 00:12:12.212 --rc geninfo_unexecuted_blocks=1 00:12:12.212 00:12:12.212 ' 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
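The lcov version gate traced above (lt 1.15 2 going through cmp_versions) simply splits both version strings into numeric fields and compares them left to right, treating a missing field as 0. A minimal standalone sketch of that comparison; the name ver_lt is made up for illustration (the in-tree helpers are lt/cmp_versions in scripts/common.sh, which also split on '-' and ':'):

    ver_lt() {                          # true (0) when $1 is an older version than $2
        local IFS=. i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1                        # equal versions are not "less than"
    }
    ver_lt 1.15 2 && echo 'lcov 1.15 predates 2, keep the branch/function coverage flags'

Here 1 < 2 on the first field already decides the comparison, which is why the trace returns 0 and goes on to export LCOV_OPTS with the extra --rc coverage flags.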
00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:12.212 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:12.212 Cannot find device "nvmf_init_br" 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:12:12.212 09:14:06 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:12.212 Cannot find device "nvmf_init_br2" 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:12:12.212 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:12.471 Cannot find device "nvmf_tgt_br" 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:12.471 Cannot find device "nvmf_tgt_br2" 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:12.471 Cannot find device "nvmf_init_br" 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:12.471 Cannot find device "nvmf_init_br2" 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:12.471 Cannot find device "nvmf_tgt_br" 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:12.471 Cannot find device "nvmf_tgt_br2" 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:12.471 Cannot find device "nvmf_br" 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:12.471 Cannot find device "nvmf_init_if" 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:12.471 Cannot find device "nvmf_init_if2" 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:12.471 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:12.471 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:12.471 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:12.730 09:14:06 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:12.730 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:12.730 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:12:12.730 00:12:12.730 --- 10.0.0.3 ping statistics --- 00:12:12.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.730 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:12.730 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:12.730 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:12:12.730 00:12:12.730 --- 10.0.0.4 ping statistics --- 00:12:12.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.730 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:12.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:12.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:12:12.730 00:12:12.730 --- 10.0.0.1 ping statistics --- 00:12:12.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.730 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:12.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:12.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:12:12.730 00:12:12.730 --- 10.0.0.2 ping statistics --- 00:12:12.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.730 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:12.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=69618 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 69618 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 69618 ']' 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.730 09:14:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:12.989 [2024-12-13 09:14:06.625050] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:12:12.989 [2024-12-13 09:14:06.626047] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.989 [2024-12-13 09:14:06.818013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.246 [2024-12-13 09:14:06.944691] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:13.247 [2024-12-13 09:14:06.944961] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:13.247 [2024-12-13 09:14:06.945143] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:13.247 [2024-12-13 09:14:06.945479] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:13.247 [2024-12-13 09:14:06.945544] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:13.247 [2024-12-13 09:14:06.947184] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.504 [2024-12-13 09:14:07.136819] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:13.763 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:13.763 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:12:13.763 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:13.763 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:13.764 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:13.764 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:13.764 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:13.764 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:13.764 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.764 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:13.764 [2024-12-13 09:14:07.583202] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:13.764 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.764 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:13.764 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.764 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:13.764 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.764 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:13.764 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.764 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:12:13.764 [2024-12-13 09:14:07.599463] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:13.764 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.764 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:13.764 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.764 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:13.764 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.764 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:13.764 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.764 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:13.764 malloc0 00:12:13.764 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.764 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:13.764 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.764 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:14.022 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.022 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:14.022 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:14.022 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:12:14.022 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:12:14.022 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:14.022 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:14.022 { 00:12:14.022 "params": { 00:12:14.022 "name": "Nvme$subsystem", 00:12:14.022 "trtype": "$TEST_TRANSPORT", 00:12:14.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:14.022 "adrfam": "ipv4", 00:12:14.022 "trsvcid": "$NVMF_PORT", 00:12:14.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:14.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:14.022 "hdgst": ${hdgst:-false}, 00:12:14.022 "ddgst": ${ddgst:-false} 00:12:14.022 }, 00:12:14.022 "method": "bdev_nvme_attach_controller" 00:12:14.022 } 00:12:14.022 EOF 00:12:14.022 )") 00:12:14.022 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:14.022 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:12:14.022 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:14.022 09:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:14.022 "params": { 00:12:14.022 "name": "Nvme1", 00:12:14.022 "trtype": "tcp", 00:12:14.022 "traddr": "10.0.0.3", 00:12:14.022 "adrfam": "ipv4", 00:12:14.022 "trsvcid": "4420", 00:12:14.022 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:14.022 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:14.022 "hdgst": false, 00:12:14.022 "ddgst": false 00:12:14.022 }, 00:12:14.022 "method": "bdev_nvme_attach_controller" 00:12:14.022 }' 00:12:14.022 [2024-12-13 09:14:07.767722] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:12:14.022 [2024-12-13 09:14:07.767885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69651 ] 00:12:14.280 [2024-12-13 09:14:07.953969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.280 [2024-12-13 09:14:08.078781] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.538 [2024-12-13 09:14:08.263455] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:14.796 Running I/O for 10 seconds... 00:12:16.662 5209.00 IOPS, 40.70 MiB/s [2024-12-13T09:14:11.487Z] 5215.50 IOPS, 40.75 MiB/s [2024-12-13T09:14:12.470Z] 5220.67 IOPS, 40.79 MiB/s [2024-12-13T09:14:13.843Z] 5223.00 IOPS, 40.80 MiB/s [2024-12-13T09:14:14.777Z] 5230.20 IOPS, 40.86 MiB/s [2024-12-13T09:14:15.712Z] 5231.50 IOPS, 40.87 MiB/s [2024-12-13T09:14:16.647Z] 5232.86 IOPS, 40.88 MiB/s [2024-12-13T09:14:17.582Z] 5246.12 IOPS, 40.99 MiB/s [2024-12-13T09:14:18.517Z] 5244.33 IOPS, 40.97 MiB/s [2024-12-13T09:14:18.517Z] 5236.30 IOPS, 40.91 MiB/s 00:12:24.627 Latency(us) 00:12:24.627 [2024-12-13T09:14:18.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.627 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:24.627 Verification LBA range: start 0x0 length 0x1000 00:12:24.627 Nvme1n1 : 10.02 5239.32 40.93 0.00 0.00 24364.07 3470.43 32172.22 00:12:24.627 [2024-12-13T09:14:18.517Z] =================================================================================================================== 00:12:24.627 [2024-12-13T09:14:18.517Z] Total : 5239.32 40.93 0.00 0.00 24364.07 3470.43 32172.22 00:12:25.562 09:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=69781 00:12:25.562 09:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:25.562 09:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:25.562 09:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:25.562 09:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:25.562 09:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:12:25.562 09:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:12:25.562 09:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:25.562 09:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:25.562 { 00:12:25.562 "params": { 00:12:25.562 "name": "Nvme$subsystem", 00:12:25.562 "trtype": "$TEST_TRANSPORT", 00:12:25.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:25.562 "adrfam": "ipv4", 00:12:25.562 "trsvcid": "$NVMF_PORT", 00:12:25.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:25.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:25.562 "hdgst": ${hdgst:-false}, 00:12:25.562 "ddgst": ${ddgst:-false} 00:12:25.562 }, 00:12:25.562 "method": "bdev_nvme_attach_controller" 00:12:25.562 } 00:12:25.562 EOF 00:12:25.562 )") 00:12:25.562 [2024-12-13 09:14:19.333151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.562 [2024-12-13 09:14:19.333221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.562 09:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:25.562 09:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:12:25.562 09:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:25.562 09:14:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:25.562 "params": { 00:12:25.562 "name": "Nvme1", 00:12:25.562 "trtype": "tcp", 00:12:25.562 "traddr": "10.0.0.3", 00:12:25.562 "adrfam": "ipv4", 00:12:25.562 "trsvcid": "4420", 00:12:25.562 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:25.562 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:25.562 "hdgst": false, 00:12:25.562 "ddgst": false 00:12:25.562 }, 00:12:25.562 "method": "bdev_nvme_attach_controller" 00:12:25.562 }' 00:12:25.562 [2024-12-13 09:14:19.345043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.562 [2024-12-13 09:14:19.345108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.562 [2024-12-13 09:14:19.353060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.562 [2024-12-13 09:14:19.353100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.562 [2024-12-13 09:14:19.365035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.562 [2024-12-13 09:14:19.365097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.562 [2024-12-13 09:14:19.377065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.562 [2024-12-13 09:14:19.377106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.562 [2024-12-13 09:14:19.389077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.562 [2024-12-13 09:14:19.389327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.562 [2024-12-13 09:14:19.401064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.562 [2024-12-13 09:14:19.401105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.562 [2024-12-13 09:14:19.413077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.562 [2024-12-13 09:14:19.413134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.562 [2024-12-13 09:14:19.425118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.562 [2024-12-13 09:14:19.425162] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.562 [2024-12-13 09:14:19.437068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.562 [2024-12-13 09:14:19.437125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.562 [2024-12-13 09:14:19.446159] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:12:25.562 [2024-12-13 09:14:19.446408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69781 ] 00:12:25.823 [2024-12-13 09:14:19.453101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.823 [2024-12-13 09:14:19.453148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.823 [2024-12-13 09:14:19.461082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.823 [2024-12-13 09:14:19.461328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.823 [2024-12-13 09:14:19.473099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.823 [2024-12-13 09:14:19.473139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.823 [2024-12-13 09:14:19.485100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.823 [2024-12-13 09:14:19.485160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.823 [2024-12-13 09:14:19.501098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.823 [2024-12-13 09:14:19.501137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.823 [2024-12-13 09:14:19.513102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.823 [2024-12-13 09:14:19.513162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.823 [2024-12-13 09:14:19.521194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.823 [2024-12-13 09:14:19.521240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.823 [2024-12-13 09:14:19.533118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.823 [2024-12-13 09:14:19.533347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.823 [2024-12-13 09:14:19.545122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.823 [2024-12-13 09:14:19.545163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.823 [2024-12-13 09:14:19.557092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.823 [2024-12-13 09:14:19.557150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.823 [2024-12-13 09:14:19.569155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.823 [2024-12-13 09:14:19.569209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.823 [2024-12-13 09:14:19.581126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:25.823 [2024-12-13 09:14:19.581375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.823 [2024-12-13 09:14:19.593123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.823 [2024-12-13 09:14:19.593162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.823 [2024-12-13 09:14:19.605164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.823 [2024-12-13 09:14:19.605220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.823 [2024-12-13 09:14:19.617138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.824 [2024-12-13 09:14:19.617175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.824 [2024-12-13 09:14:19.624252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.824 [2024-12-13 09:14:19.629156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.824 [2024-12-13 09:14:19.629215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.824 [2024-12-13 09:14:19.641207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.824 [2024-12-13 09:14:19.641259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.824 [2024-12-13 09:14:19.653155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.824 [2024-12-13 09:14:19.653196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.824 [2024-12-13 09:14:19.665177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.824 [2024-12-13 09:14:19.665217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.824 [2024-12-13 09:14:19.677176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.824 [2024-12-13 09:14:19.677218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.824 [2024-12-13 09:14:19.689166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.824 [2024-12-13 09:14:19.689406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:25.824 [2024-12-13 09:14:19.701185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:25.824 [2024-12-13 09:14:19.701242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.082 [2024-12-13 09:14:19.713207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.082 [2024-12-13 09:14:19.713443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.082 [2024-12-13 09:14:19.716003] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.082 [2024-12-13 09:14:19.725176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.082 [2024-12-13 09:14:19.725433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.082 [2024-12-13 09:14:19.737254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.082 [2024-12-13 09:14:19.737351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.082 [2024-12-13 09:14:19.749211] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.082 [2024-12-13 09:14:19.749273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.082 [2024-12-13 09:14:19.761200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.082 [2024-12-13 09:14:19.761239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.082 [2024-12-13 09:14:19.773202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.082 [2024-12-13 09:14:19.773244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.082 [2024-12-13 09:14:19.785269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.082 [2024-12-13 09:14:19.785356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.082 [2024-12-13 09:14:19.797258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.082 [2024-12-13 09:14:19.797592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.082 [2024-12-13 09:14:19.809217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.082 [2024-12-13 09:14:19.809433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.082 [2024-12-13 09:14:19.821212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.082 [2024-12-13 09:14:19.821468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.082 [2024-12-13 09:14:19.833229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.082 [2024-12-13 09:14:19.833466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.082 [2024-12-13 09:14:19.845212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.082 [2024-12-13 09:14:19.845452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.082 [2024-12-13 09:14:19.857253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.082 [2024-12-13 09:14:19.857473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.082 [2024-12-13 09:14:19.869237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.082 [2024-12-13 09:14:19.869475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.082 [2024-12-13 09:14:19.881233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.082 [2024-12-13 09:14:19.881451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.082 [2024-12-13 09:14:19.884168] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:26.082 [2024-12-13 09:14:19.893353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.082 [2024-12-13 09:14:19.893651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.082 [2024-12-13 09:14:19.905279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.082 [2024-12-13 09:14:19.905553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.082 [2024-12-13 09:14:19.917237] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.082 [2024-12-13 09:14:19.917479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.082 [2024-12-13 09:14:19.929259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.082 [2024-12-13 09:14:19.929472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.082 [2024-12-13 09:14:19.941242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.082 [2024-12-13 09:14:19.941484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.082 [2024-12-13 09:14:19.953259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.082 [2024-12-13 09:14:19.953481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.082 [2024-12-13 09:14:19.965276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.082 [2024-12-13 09:14:19.965515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.341 [2024-12-13 09:14:19.977260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.341 [2024-12-13 09:14:19.977501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.341 [2024-12-13 09:14:19.989699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.341 [2024-12-13 09:14:19.989887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.341 [2024-12-13 09:14:20.001758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.341 [2024-12-13 09:14:20.001986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.341 [2024-12-13 09:14:20.013768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.341 [2024-12-13 09:14:20.013972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.341 [2024-12-13 09:14:20.025803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.341 [2024-12-13 09:14:20.026020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.341 [2024-12-13 09:14:20.037751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.341 [2024-12-13 09:14:20.037961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.341 [2024-12-13 09:14:20.049909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.341 [2024-12-13 09:14:20.050119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.341 [2024-12-13 09:14:20.061828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.341 [2024-12-13 09:14:20.062022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.341 Running I/O for 5 seconds... 
00:12:26.341 [2024-12-13 09:14:20.080378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.341 [2024-12-13 09:14:20.080555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.341 [2024-12-13 09:14:20.096226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.341 [2024-12-13 09:14:20.096289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.341 [2024-12-13 09:14:20.107363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.341 [2024-12-13 09:14:20.107418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.341 [2024-12-13 09:14:20.124368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.341 [2024-12-13 09:14:20.124429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.341 [2024-12-13 09:14:20.139629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.341 [2024-12-13 09:14:20.139686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.341 [2024-12-13 09:14:20.156743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.341 [2024-12-13 09:14:20.156807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.341 [2024-12-13 09:14:20.171452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.341 [2024-12-13 09:14:20.171496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.341 [2024-12-13 09:14:20.187669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.341 [2024-12-13 09:14:20.187732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.341 [2024-12-13 09:14:20.205194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.341 [2024-12-13 09:14:20.205238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.341 [2024-12-13 09:14:20.220468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.341 [2024-12-13 09:14:20.220536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.599 [2024-12-13 09:14:20.236281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.599 [2024-12-13 09:14:20.236355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.599 [2024-12-13 09:14:20.248039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.599 [2024-12-13 09:14:20.248098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.599 [2024-12-13 09:14:20.265314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.599 [2024-12-13 09:14:20.265374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.599 [2024-12-13 09:14:20.281126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.599 [2024-12-13 09:14:20.281189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.599 [2024-12-13 09:14:20.296793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.599 
[2024-12-13 09:14:20.296852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.599 [2024-12-13 09:14:20.312752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.599 [2024-12-13 09:14:20.312813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.600 [2024-12-13 09:14:20.330250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.600 [2024-12-13 09:14:20.330322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.600 [2024-12-13 09:14:20.345544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.600 [2024-12-13 09:14:20.345592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.600 [2024-12-13 09:14:20.356030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.600 [2024-12-13 09:14:20.356072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.600 [2024-12-13 09:14:20.372556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.600 [2024-12-13 09:14:20.372620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.600 [2024-12-13 09:14:20.387029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.600 [2024-12-13 09:14:20.387246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.600 [2024-12-13 09:14:20.402590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.600 [2024-12-13 09:14:20.402849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.600 [2024-12-13 09:14:20.418748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.600 [2024-12-13 09:14:20.418982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.600 [2024-12-13 09:14:20.434180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.600 [2024-12-13 09:14:20.434440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.600 [2024-12-13 09:14:20.449912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.600 [2024-12-13 09:14:20.450106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.600 [2024-12-13 09:14:20.461755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.600 [2024-12-13 09:14:20.461946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.600 [2024-12-13 09:14:20.477487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.600 [2024-12-13 09:14:20.477687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.858 [2024-12-13 09:14:20.494127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.858 [2024-12-13 09:14:20.494356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.858 [2024-12-13 09:14:20.510495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.858 [2024-12-13 09:14:20.510720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.858 [2024-12-13 09:14:20.527313] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.858 [2024-12-13 09:14:20.527541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.858 [2024-12-13 09:14:20.543017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.858 [2024-12-13 09:14:20.543245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.858 [2024-12-13 09:14:20.556312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.858 [2024-12-13 09:14:20.556526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.858 [2024-12-13 09:14:20.576550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.858 [2024-12-13 09:14:20.576730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.858 [2024-12-13 09:14:20.593135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.858 [2024-12-13 09:14:20.593400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.858 [2024-12-13 09:14:20.609878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.858 [2024-12-13 09:14:20.610100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.858 [2024-12-13 09:14:20.627233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.858 [2024-12-13 09:14:20.627483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.858 [2024-12-13 09:14:20.644061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.858 [2024-12-13 09:14:20.644241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.858 [2024-12-13 09:14:20.660129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.858 [2024-12-13 09:14:20.660356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.858 [2024-12-13 09:14:20.670669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.858 [2024-12-13 09:14:20.670848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.858 [2024-12-13 09:14:20.686278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.859 [2024-12-13 09:14:20.686510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.859 [2024-12-13 09:14:20.701153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.859 [2024-12-13 09:14:20.701375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.859 [2024-12-13 09:14:20.716818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.859 [2024-12-13 09:14:20.717007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.859 [2024-12-13 09:14:20.732896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.859 [2024-12-13 09:14:20.733072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:26.859 [2024-12-13 09:14:20.742721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:26.859 [2024-12-13 09:14:20.742929] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.117 [2024-12-13 09:14:20.759645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.117 [2024-12-13 09:14:20.759821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.117 [2024-12-13 09:14:20.775855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.117 [2024-12-13 09:14:20.776053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.117 [2024-12-13 09:14:20.792947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.117 [2024-12-13 09:14:20.793136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.118 [2024-12-13 09:14:20.810094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.118 [2024-12-13 09:14:20.810278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.118 [2024-12-13 09:14:20.825504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.118 [2024-12-13 09:14:20.825697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.118 [2024-12-13 09:14:20.841775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.118 [2024-12-13 09:14:20.841974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.118 [2024-12-13 09:14:20.858544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.118 [2024-12-13 09:14:20.858867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.118 [2024-12-13 09:14:20.875171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.118 [2024-12-13 09:14:20.875410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.118 [2024-12-13 09:14:20.892018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.118 [2024-12-13 09:14:20.892260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.118 [2024-12-13 09:14:20.909372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.118 [2024-12-13 09:14:20.909648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.118 [2024-12-13 09:14:20.925874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.118 [2024-12-13 09:14:20.925916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.118 [2024-12-13 09:14:20.944152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.118 [2024-12-13 09:14:20.944214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.118 [2024-12-13 09:14:20.958862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.118 [2024-12-13 09:14:20.958905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.118 [2024-12-13 09:14:20.974993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.118 [2024-12-13 09:14:20.975070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.118 [2024-12-13 09:14:20.992252] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.118 [2024-12-13 09:14:20.992471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.376 [2024-12-13 09:14:21.007810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.376 [2024-12-13 09:14:21.008124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.376 [2024-12-13 09:14:21.024705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.376 [2024-12-13 09:14:21.024749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.376 [2024-12-13 09:14:21.041117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.376 [2024-12-13 09:14:21.041179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.376 [2024-12-13 09:14:21.057918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.376 [2024-12-13 09:14:21.057960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.376 9911.00 IOPS, 77.43 MiB/s [2024-12-13T09:14:21.266Z] [2024-12-13 09:14:21.073094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.376 [2024-12-13 09:14:21.073156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.376 [2024-12-13 09:14:21.089212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.376 [2024-12-13 09:14:21.089255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.376 [2024-12-13 09:14:21.099671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.376 [2024-12-13 09:14:21.099732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.376 [2024-12-13 09:14:21.116683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.376 [2024-12-13 09:14:21.116790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.376 [2024-12-13 09:14:21.130574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.376 [2024-12-13 09:14:21.130822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.376 [2024-12-13 09:14:21.146762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.376 [2024-12-13 09:14:21.146805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.376 [2024-12-13 09:14:21.161543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.376 [2024-12-13 09:14:21.161761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.376 [2024-12-13 09:14:21.178293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.376 [2024-12-13 09:14:21.178388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.376 [2024-12-13 09:14:21.194081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.376 [2024-12-13 09:14:21.194160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.376 [2024-12-13 09:14:21.209337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:27.376 [2024-12-13 09:14:21.209379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.376 [2024-12-13 09:14:21.225274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.376 [2024-12-13 09:14:21.225380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.376 [2024-12-13 09:14:21.241953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.376 [2024-12-13 09:14:21.241996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.376 [2024-12-13 09:14:21.259945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.376 [2024-12-13 09:14:21.260153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.635 [2024-12-13 09:14:21.273119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.635 [2024-12-13 09:14:21.273165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.635 [2024-12-13 09:14:21.291791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.635 [2024-12-13 09:14:21.291852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.635 [2024-12-13 09:14:21.307734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.635 [2024-12-13 09:14:21.307776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.635 [2024-12-13 09:14:21.323797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.635 [2024-12-13 09:14:21.323858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.635 [2024-12-13 09:14:21.334632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.635 [2024-12-13 09:14:21.334939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.635 [2024-12-13 09:14:21.350658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.635 [2024-12-13 09:14:21.350751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.635 [2024-12-13 09:14:21.366475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.635 [2024-12-13 09:14:21.366517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.635 [2024-12-13 09:14:21.383221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.635 [2024-12-13 09:14:21.383344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.635 [2024-12-13 09:14:21.400413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.635 [2024-12-13 09:14:21.400456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.635 [2024-12-13 09:14:21.416979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.635 [2024-12-13 09:14:21.417042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.635 [2024-12-13 09:14:21.433123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.635 [2024-12-13 09:14:21.433166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.635 [2024-12-13 09:14:21.443707] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.635 [2024-12-13 09:14:21.443769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.635 [2024-12-13 09:14:21.459457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.635 [2024-12-13 09:14:21.459499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.635 [2024-12-13 09:14:21.476038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.635 [2024-12-13 09:14:21.476132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.635 [2024-12-13 09:14:21.493112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.635 [2024-12-13 09:14:21.493330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.635 [2024-12-13 09:14:21.508744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.635 [2024-12-13 09:14:21.508944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.894 [2024-12-13 09:14:21.526081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.894 [2024-12-13 09:14:21.526140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.894 [2024-12-13 09:14:21.541221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.894 [2024-12-13 09:14:21.541286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.894 [2024-12-13 09:14:21.553878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.894 [2024-12-13 09:14:21.553923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.894 [2024-12-13 09:14:21.572445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.894 [2024-12-13 09:14:21.572506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.894 [2024-12-13 09:14:21.586621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.894 [2024-12-13 09:14:21.586835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.894 [2024-12-13 09:14:21.601618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.894 [2024-12-13 09:14:21.601811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.894 [2024-12-13 09:14:21.615516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.894 [2024-12-13 09:14:21.615561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.894 [2024-12-13 09:14:21.632612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.894 [2024-12-13 09:14:21.632806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.894 [2024-12-13 09:14:21.649114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.894 [2024-12-13 09:14:21.649158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.894 [2024-12-13 09:14:21.665377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.894 [2024-12-13 09:14:21.665436] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.894 [2024-12-13 09:14:21.677728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.894 [2024-12-13 09:14:21.677772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.894 [2024-12-13 09:14:21.693600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.894 [2024-12-13 09:14:21.693669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.894 [2024-12-13 09:14:21.709098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.894 [2024-12-13 09:14:21.709178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.894 [2024-12-13 09:14:21.720900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.894 [2024-12-13 09:14:21.720952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.894 [2024-12-13 09:14:21.737454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.894 [2024-12-13 09:14:21.737501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.894 [2024-12-13 09:14:21.752655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.894 [2024-12-13 09:14:21.752728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:27.894 [2024-12-13 09:14:21.769428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:27.894 [2024-12-13 09:14:21.769487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.153 [2024-12-13 09:14:21.785854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.153 [2024-12-13 09:14:21.785927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.154 [2024-12-13 09:14:21.801608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.154 [2024-12-13 09:14:21.801667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.154 [2024-12-13 09:14:21.817921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.154 [2024-12-13 09:14:21.817992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.154 [2024-12-13 09:14:21.835637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.154 [2024-12-13 09:14:21.835697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.154 [2024-12-13 09:14:21.850122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.154 [2024-12-13 09:14:21.850181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.154 [2024-12-13 09:14:21.866331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.154 [2024-12-13 09:14:21.866388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.154 [2024-12-13 09:14:21.883899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.154 [2024-12-13 09:14:21.883958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.154 [2024-12-13 09:14:21.900381] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.154 [2024-12-13 09:14:21.900447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.154 [2024-12-13 09:14:21.919557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.154 [2024-12-13 09:14:21.919617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.154 [2024-12-13 09:14:21.934063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.154 [2024-12-13 09:14:21.934121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.154 [2024-12-13 09:14:21.950857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.154 [2024-12-13 09:14:21.950916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.154 [2024-12-13 09:14:21.968294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.154 [2024-12-13 09:14:21.968364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.154 [2024-12-13 09:14:21.984619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.154 [2024-12-13 09:14:21.984692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.154 [2024-12-13 09:14:22.001882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.154 [2024-12-13 09:14:22.001942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.154 [2024-12-13 09:14:22.019204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.154 [2024-12-13 09:14:22.019264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.154 [2024-12-13 09:14:22.032278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.154 [2024-12-13 09:14:22.032367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.413 [2024-12-13 09:14:22.050975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.413 [2024-12-13 09:14:22.051066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.413 [2024-12-13 09:14:22.065450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.413 [2024-12-13 09:14:22.065508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.413 9872.00 IOPS, 77.12 MiB/s [2024-12-13T09:14:22.303Z] [2024-12-13 09:14:22.082070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.413 [2024-12-13 09:14:22.082131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.413 [2024-12-13 09:14:22.098229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.413 [2024-12-13 09:14:22.098326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.413 [2024-12-13 09:14:22.115222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.413 [2024-12-13 09:14:22.115305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.413 [2024-12-13 09:14:22.131998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
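The two errors repeat in lock-step through this whole stretch of the run: subsystem.c:2130 (spdk_nvmf_subsystem_add_ns_ext) rejects each request because NSID 1 is already attached to the subsystem, and the RPC layer (nvmf_rpc.c:1520, nvmf_rpc_ns_paused) then surfaces the same failure as "Unable to add namespace". A minimal sketch of the kind of RPC loop that produces this pattern is shown below, assuming a stock SPDK target driven through scripts/rpc.py; the NQN, bdev names and iteration count are illustrative, not values taken from this job.

# Hedged sketch -- not the autotest script itself. Asking for an NSID that the
# subsystem already exposes is what makes spdk_nvmf_subsystem_add_ns_ext() log
# "Requested NSID 1 already in use", followed by "Unable to add namespace" from
# the RPC layer. NQN and bdev names below are made up for illustration.
RPC=scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns "$NQN" Malloc0 -n 1      # NSID 1 is now taken

for _ in $(seq 1 8); do
    # Every further add that explicitly requests NSID 1 collides and is rejected.
    $RPC nvmf_subsystem_add_ns "$NQN" Malloc1 -n 1 || true
done

If the goal were actually to attach another namespace, omitting -n would let the target pick the next free NSID; the explicit collision is only useful as a way to reproduce the log lines seen in this run.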
00:12:28.413 [2024-12-13 09:14:22.132059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.413 [2024-12-13 09:14:22.147903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.413 [2024-12-13 09:14:22.147966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.413 [2024-12-13 09:14:22.163199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.413 [2024-12-13 09:14:22.163257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.413 [2024-12-13 09:14:22.179191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.413 [2024-12-13 09:14:22.179250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.413 [2024-12-13 09:14:22.196041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.413 [2024-12-13 09:14:22.196101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.413 [2024-12-13 09:14:22.212843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.413 [2024-12-13 09:14:22.212901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.413 [2024-12-13 09:14:22.228757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.413 [2024-12-13 09:14:22.228816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.413 [2024-12-13 09:14:22.244565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.413 [2024-12-13 09:14:22.244614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.413 [2024-12-13 09:14:22.257488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.413 [2024-12-13 09:14:22.257550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.413 [2024-12-13 09:14:22.276081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.413 [2024-12-13 09:14:22.276143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.413 [2024-12-13 09:14:22.293898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.413 [2024-12-13 09:14:22.293948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.672 [2024-12-13 09:14:22.310418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.672 [2024-12-13 09:14:22.310481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.672 [2024-12-13 09:14:22.323776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.672 [2024-12-13 09:14:22.323851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.672 [2024-12-13 09:14:22.343778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.672 [2024-12-13 09:14:22.343854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.672 [2024-12-13 09:14:22.361253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.672 [2024-12-13 09:14:22.361324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.672 [2024-12-13 09:14:22.376935] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.672 [2024-12-13 09:14:22.376997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.672 [2024-12-13 09:14:22.389073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.672 [2024-12-13 09:14:22.389135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.672 [2024-12-13 09:14:22.405241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.672 [2024-12-13 09:14:22.405327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.672 [2024-12-13 09:14:22.421040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.672 [2024-12-13 09:14:22.421098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.672 [2024-12-13 09:14:22.437136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.672 [2024-12-13 09:14:22.437193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.672 [2024-12-13 09:14:22.454791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.672 [2024-12-13 09:14:22.454851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.672 [2024-12-13 09:14:22.470260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.672 [2024-12-13 09:14:22.470344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.672 [2024-12-13 09:14:22.486037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.672 [2024-12-13 09:14:22.486096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.672 [2024-12-13 09:14:22.501727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.672 [2024-12-13 09:14:22.501784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.672 [2024-12-13 09:14:22.514714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.672 [2024-12-13 09:14:22.514764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.672 [2024-12-13 09:14:22.532522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.672 [2024-12-13 09:14:22.532574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.672 [2024-12-13 09:14:22.547449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.672 [2024-12-13 09:14:22.547509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.931 [2024-12-13 09:14:22.562881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.931 [2024-12-13 09:14:22.562932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.931 [2024-12-13 09:14:22.579907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.931 [2024-12-13 09:14:22.579972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.931 [2024-12-13 09:14:22.596881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.931 [2024-12-13 09:14:22.596948] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.931 [2024-12-13 09:14:22.612975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.931 [2024-12-13 09:14:22.613036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.931 [2024-12-13 09:14:22.629329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.931 [2024-12-13 09:14:22.629399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.931 [2024-12-13 09:14:22.639234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.931 [2024-12-13 09:14:22.639332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.931 [2024-12-13 09:14:22.656233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.931 [2024-12-13 09:14:22.656316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.931 [2024-12-13 09:14:22.672537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.931 [2024-12-13 09:14:22.672599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.931 [2024-12-13 09:14:22.688015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.931 [2024-12-13 09:14:22.688073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.931 [2024-12-13 09:14:22.703777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.931 [2024-12-13 09:14:22.703835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.931 [2024-12-13 09:14:22.716071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.931 [2024-12-13 09:14:22.716129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.931 [2024-12-13 09:14:22.734000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.931 [2024-12-13 09:14:22.734072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.931 [2024-12-13 09:14:22.749550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.931 [2024-12-13 09:14:22.749616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.931 [2024-12-13 09:14:22.765570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.931 [2024-12-13 09:14:22.765621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.931 [2024-12-13 09:14:22.783131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.931 [2024-12-13 09:14:22.783191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.931 [2024-12-13 09:14:22.796233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.931 [2024-12-13 09:14:22.796318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.931 [2024-12-13 09:14:22.813480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.931 [2024-12-13 09:14:22.813540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.191 [2024-12-13 09:14:22.829549] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.191 [2024-12-13 09:14:22.829612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.191 [2024-12-13 09:14:22.846145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.191 [2024-12-13 09:14:22.846218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.191 [2024-12-13 09:14:22.863231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.191 [2024-12-13 09:14:22.863316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.191 [2024-12-13 09:14:22.879219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.191 [2024-12-13 09:14:22.879319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.191 [2024-12-13 09:14:22.890420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.191 [2024-12-13 09:14:22.890493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.191 [2024-12-13 09:14:22.906724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.191 [2024-12-13 09:14:22.906806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.191 [2024-12-13 09:14:22.921749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.191 [2024-12-13 09:14:22.921807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.191 [2024-12-13 09:14:22.933249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.191 [2024-12-13 09:14:22.933336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.191 [2024-12-13 09:14:22.949580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.191 [2024-12-13 09:14:22.949640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.191 [2024-12-13 09:14:22.965528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.191 [2024-12-13 09:14:22.965585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.191 [2024-12-13 09:14:22.981945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.191 [2024-12-13 09:14:22.982003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.191 [2024-12-13 09:14:22.998794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.191 [2024-12-13 09:14:22.998881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.191 [2024-12-13 09:14:23.014892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.191 [2024-12-13 09:14:23.014963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.191 [2024-12-13 09:14:23.031218] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.191 [2024-12-13 09:14:23.031274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.191 [2024-12-13 09:14:23.041426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.191 [2024-12-13 09:14:23.041485] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.191 [2024-12-13 09:14:23.057633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.191 [2024-12-13 09:14:23.057707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.191 9739.67 IOPS, 76.09 MiB/s [2024-12-13T09:14:23.081Z] [2024-12-13 09:14:23.074180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.191 [2024-12-13 09:14:23.074239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.451 [2024-12-13 09:14:23.089536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.451 [2024-12-13 09:14:23.089594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.451 [2024-12-13 09:14:23.105273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.451 [2024-12-13 09:14:23.105373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.451 [2024-12-13 09:14:23.122404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.451 [2024-12-13 09:14:23.122462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.451 [2024-12-13 09:14:23.139277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.451 [2024-12-13 09:14:23.139376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.451 [2024-12-13 09:14:23.156676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.451 [2024-12-13 09:14:23.156734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.451 [2024-12-13 09:14:23.171444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.451 [2024-12-13 09:14:23.171505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.451 [2024-12-13 09:14:23.187858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.451 [2024-12-13 09:14:23.187916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.451 [2024-12-13 09:14:23.204319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.451 [2024-12-13 09:14:23.204387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.451 [2024-12-13 09:14:23.220351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.451 [2024-12-13 09:14:23.220418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.451 [2024-12-13 09:14:23.231429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.451 [2024-12-13 09:14:23.231487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.451 [2024-12-13 09:14:23.247668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.451 [2024-12-13 09:14:23.247737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.451 [2024-12-13 09:14:23.262579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.451 [2024-12-13 09:14:23.262637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.451 [2024-12-13 
09:14:23.277565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.451 [2024-12-13 09:14:23.277623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.451 [2024-12-13 09:14:23.294701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.451 [2024-12-13 09:14:23.294762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.451 [2024-12-13 09:14:23.310135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.451 [2024-12-13 09:14:23.310208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.451 [2024-12-13 09:14:23.327777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.451 [2024-12-13 09:14:23.327852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.711 [2024-12-13 09:14:23.340905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.711 [2024-12-13 09:14:23.340962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.711 [2024-12-13 09:14:23.358884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.711 [2024-12-13 09:14:23.358933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.711 [2024-12-13 09:14:23.374529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.711 [2024-12-13 09:14:23.374589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.711 [2024-12-13 09:14:23.392382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.711 [2024-12-13 09:14:23.392453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.711 [2024-12-13 09:14:23.407563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.711 [2024-12-13 09:14:23.407624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.711 [2024-12-13 09:14:23.423193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.711 [2024-12-13 09:14:23.423250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.711 [2024-12-13 09:14:23.434081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.711 [2024-12-13 09:14:23.434138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.711 [2024-12-13 09:14:23.450335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.711 [2024-12-13 09:14:23.450421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.711 [2024-12-13 09:14:23.466406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.711 [2024-12-13 09:14:23.466465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.711 [2024-12-13 09:14:23.481545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.711 [2024-12-13 09:14:23.481603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.711 [2024-12-13 09:14:23.497213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.711 [2024-12-13 09:14:23.497271] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.711 [2024-12-13 09:14:23.512612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.711 [2024-12-13 09:14:23.512685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.711 [2024-12-13 09:14:23.529076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.711 [2024-12-13 09:14:23.529136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.711 [2024-12-13 09:14:23.546475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.711 [2024-12-13 09:14:23.546533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.711 [2024-12-13 09:14:23.561822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.711 [2024-12-13 09:14:23.561882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.711 [2024-12-13 09:14:23.578535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.711 [2024-12-13 09:14:23.578593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.712 [2024-12-13 09:14:23.594525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.712 [2024-12-13 09:14:23.594584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.971 [2024-12-13 09:14:23.606483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.971 [2024-12-13 09:14:23.606542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.971 [2024-12-13 09:14:23.623669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.971 [2024-12-13 09:14:23.623741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.971 [2024-12-13 09:14:23.640545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.971 [2024-12-13 09:14:23.640604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.971 [2024-12-13 09:14:23.656399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.971 [2024-12-13 09:14:23.656457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.971 [2024-12-13 09:14:23.667580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.971 [2024-12-13 09:14:23.667639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.971 [2024-12-13 09:14:23.684450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.971 [2024-12-13 09:14:23.684522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.971 [2024-12-13 09:14:23.700065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.971 [2024-12-13 09:14:23.700122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.971 [2024-12-13 09:14:23.716411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.971 [2024-12-13 09:14:23.716468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.971 [2024-12-13 09:14:23.733392] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.971 [2024-12-13 09:14:23.733448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.971 [2024-12-13 09:14:23.749056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.971 [2024-12-13 09:14:23.749114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.971 [2024-12-13 09:14:23.760259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.971 [2024-12-13 09:14:23.760347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.971 [2024-12-13 09:14:23.775625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.971 [2024-12-13 09:14:23.775690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.971 [2024-12-13 09:14:23.790428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.971 [2024-12-13 09:14:23.790487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.971 [2024-12-13 09:14:23.805634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.971 [2024-12-13 09:14:23.805725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.971 [2024-12-13 09:14:23.822174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.971 [2024-12-13 09:14:23.822232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.971 [2024-12-13 09:14:23.838956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.971 [2024-12-13 09:14:23.839043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.971 [2024-12-13 09:14:23.850538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.971 [2024-12-13 09:14:23.850597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.230 [2024-12-13 09:14:23.867731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.230 [2024-12-13 09:14:23.867820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.230 [2024-12-13 09:14:23.884041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.230 [2024-12-13 09:14:23.884102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.230 [2024-12-13 09:14:23.900903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.230 [2024-12-13 09:14:23.900977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.230 [2024-12-13 09:14:23.914359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.230 [2024-12-13 09:14:23.914430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.230 [2024-12-13 09:14:23.932624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.230 [2024-12-13 09:14:23.932698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.230 [2024-12-13 09:14:23.948988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.230 [2024-12-13 09:14:23.949048] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.230 [2024-12-13 09:14:23.962031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.230 [2024-12-13 09:14:23.962090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.230 [2024-12-13 09:14:23.980235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.230 [2024-12-13 09:14:23.980337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.230 [2024-12-13 09:14:23.995730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.230 [2024-12-13 09:14:23.995789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.230 [2024-12-13 09:14:24.011740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.230 [2024-12-13 09:14:24.011798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.230 [2024-12-13 09:14:24.023307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.230 [2024-12-13 09:14:24.023393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.230 [2024-12-13 09:14:24.039875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.230 [2024-12-13 09:14:24.039934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.230 [2024-12-13 09:14:24.055990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.230 [2024-12-13 09:14:24.056048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.230 9702.50 IOPS, 75.80 MiB/s [2024-12-13T09:14:24.120Z] [2024-12-13 09:14:24.072333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.230 [2024-12-13 09:14:24.072402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.230 [2024-12-13 09:14:24.089898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.230 [2024-12-13 09:14:24.089956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.230 [2024-12-13 09:14:24.106048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.230 [2024-12-13 09:14:24.106106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.489 [2024-12-13 09:14:24.124141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.489 [2024-12-13 09:14:24.124197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.489 [2024-12-13 09:14:24.139704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.489 [2024-12-13 09:14:24.139774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.489 [2024-12-13 09:14:24.150607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.489 [2024-12-13 09:14:24.150690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.489 [2024-12-13 09:14:24.167377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.489 [2024-12-13 09:14:24.167433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.489 [2024-12-13 
09:14:24.182237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.489 [2024-12-13 09:14:24.182320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.489 [2024-12-13 09:14:24.197659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.489 [2024-12-13 09:14:24.197717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.489 [2024-12-13 09:14:24.209031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.489 [2024-12-13 09:14:24.209089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.489 [2024-12-13 09:14:24.225934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.489 [2024-12-13 09:14:24.225996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.489 [2024-12-13 09:14:24.241979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.489 [2024-12-13 09:14:24.242038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.489 [2024-12-13 09:14:24.259132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.489 [2024-12-13 09:14:24.259190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.489 [2024-12-13 09:14:24.275531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.489 [2024-12-13 09:14:24.275591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.489 [2024-12-13 09:14:24.292815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.489 [2024-12-13 09:14:24.292873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.489 [2024-12-13 09:14:24.308280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.489 [2024-12-13 09:14:24.308385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.489 [2024-12-13 09:14:24.323912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.489 [2024-12-13 09:14:24.323970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.489 [2024-12-13 09:14:24.335383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.489 [2024-12-13 09:14:24.335441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.489 [2024-12-13 09:14:24.353162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.489 [2024-12-13 09:14:24.353236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.489 [2024-12-13 09:14:24.369487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.489 [2024-12-13 09:14:24.369549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.748 [2024-12-13 09:14:24.382813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.748 [2024-12-13 09:14:24.382877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.748 [2024-12-13 09:14:24.401206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.748 [2024-12-13 09:14:24.401263] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.748 [2024-12-13 09:14:24.416900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.748 [2024-12-13 09:14:24.416961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.748 [2024-12-13 09:14:24.434311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.748 [2024-12-13 09:14:24.434386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.748 [2024-12-13 09:14:24.451058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.748 [2024-12-13 09:14:24.451115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.748 [2024-12-13 09:14:24.466838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.748 [2024-12-13 09:14:24.466898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.748 [2024-12-13 09:14:24.483293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.748 [2024-12-13 09:14:24.483394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.748 [2024-12-13 09:14:24.499253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.748 [2024-12-13 09:14:24.499325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.748 [2024-12-13 09:14:24.515776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.748 [2024-12-13 09:14:24.515850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.748 [2024-12-13 09:14:24.531670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.748 [2024-12-13 09:14:24.531741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.748 [2024-12-13 09:14:24.548163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.748 [2024-12-13 09:14:24.548237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.748 [2024-12-13 09:14:24.565531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.748 [2024-12-13 09:14:24.565590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.748 [2024-12-13 09:14:24.581107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.748 [2024-12-13 09:14:24.581181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.748 [2024-12-13 09:14:24.597282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.748 [2024-12-13 09:14:24.597350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.748 [2024-12-13 09:14:24.608159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.748 [2024-12-13 09:14:24.608232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.748 [2024-12-13 09:14:24.624293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.748 [2024-12-13 09:14:24.624361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.006 [2024-12-13 09:14:24.637557] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.006 [2024-12-13 09:14:24.637616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.006 [2024-12-13 09:14:24.652965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.006 [2024-12-13 09:14:24.653027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.006 [2024-12-13 09:14:24.671386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.006 [2024-12-13 09:14:24.671444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.006 [2024-12-13 09:14:24.686218] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.006 [2024-12-13 09:14:24.686320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.006 [2024-12-13 09:14:24.703404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.006 [2024-12-13 09:14:24.703462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.006 [2024-12-13 09:14:24.719395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.006 [2024-12-13 09:14:24.719459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.006 [2024-12-13 09:14:24.730302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.006 [2024-12-13 09:14:24.730372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.006 [2024-12-13 09:14:24.746641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.006 [2024-12-13 09:14:24.746727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.006 [2024-12-13 09:14:24.762977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.006 [2024-12-13 09:14:24.763025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.006 [2024-12-13 09:14:24.775770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.006 [2024-12-13 09:14:24.775845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.006 [2024-12-13 09:14:24.792974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.007 [2024-12-13 09:14:24.793032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.007 [2024-12-13 09:14:24.807990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.007 [2024-12-13 09:14:24.808047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.007 [2024-12-13 09:14:24.823889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.007 [2024-12-13 09:14:24.823946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.007 [2024-12-13 09:14:24.834555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.007 [2024-12-13 09:14:24.834613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.007 [2024-12-13 09:14:24.850274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.007 [2024-12-13 09:14:24.850359] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.007 [2024-12-13 09:14:24.866551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.007 [2024-12-13 09:14:24.866608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.007 [2024-12-13 09:14:24.881550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.007 [2024-12-13 09:14:24.881608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.266 [2024-12-13 09:14:24.897640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.266 [2024-12-13 09:14:24.897702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.266 [2024-12-13 09:14:24.914812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.266 [2024-12-13 09:14:24.914873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.266 [2024-12-13 09:14:24.932205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.266 [2024-12-13 09:14:24.932262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.266 [2024-12-13 09:14:24.949127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.266 [2024-12-13 09:14:24.949216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.266 [2024-12-13 09:14:24.965985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.266 [2024-12-13 09:14:24.966045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.266 [2024-12-13 09:14:24.982627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.266 [2024-12-13 09:14:24.982699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.266 [2024-12-13 09:14:24.995549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.266 [2024-12-13 09:14:24.995597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.266 [2024-12-13 09:14:25.014991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.266 [2024-12-13 09:14:25.015067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.266 [2024-12-13 09:14:25.031389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.266 [2024-12-13 09:14:25.031449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.266 [2024-12-13 09:14:25.043961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.266 [2024-12-13 09:14:25.044020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.266 [2024-12-13 09:14:25.058725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.266 [2024-12-13 09:14:25.058786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.266 9674.00 IOPS, 75.58 MiB/s [2024-12-13T09:14:25.156Z] [2024-12-13 09:14:25.073334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.266 [2024-12-13 09:14:25.073404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.266 00:12:31.266 
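The "IOPS, MiB/s" readings interleaved with the errors, and the Latency(us) table that follows, are the periodic progress and final summary of the I/O job kept running against Nvme1n1 while the namespace RPCs are being replayed: per the job line, a randrw workload with a 50% read mix, queue depth 128, 8192-byte I/Os on core mask 0x1 for about 5 seconds, averaging roughly 9.7K IOPS (75.6 MiB/s). For reference, a comparable standalone bdevperf invocation would look like the sketch below; the binary path, flags and JSON config name are assumptions about a stock SPDK build, not values copied from this job.

# Hedged sketch of a comparable bdevperf run -- not the command this job used.
# bdevperf_nvme.json is a hypothetical bdev_nvme config that attaches Nvme1n1
# to the NVMe-oF/TCP target under test.
./build/examples/bdevperf --json ./bdevperf_nvme.json \
    -m 0x1 -q 128 -o 8192 -w randrw -M 50 -t 5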
Latency(us)
00:12:31.266 [2024-12-13T09:14:25.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:31.266 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:31.266 Nvme1n1 : 5.01 9673.70 75.58 0.00 0.00 13212.04 5362.04 23831.27
00:12:31.266 [2024-12-13T09:14:25.156Z] ===================================================================================================================
00:12:31.266 [2024-12-13T09:14:25.156Z] Total : 9673.70 75.58 0.00 0.00 13212.04 5362.04 23831.27
00:12:31.266 [2024-12-13 09:14:25.084092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:31.266 [2024-12-13 09:14:25.084165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:31.266 [2024-12-13 09:14:25.096068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:31.266 [2024-12-13 09:14:25.096140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:31.266 [2024-12-13 09:14:25.108092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:31.266 [2024-12-13 09:14:25.108162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:31.266 [2024-12-13 09:14:25.120083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:31.266 [2024-12-13 09:14:25.120153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:31.266 [2024-12-13 09:14:25.132161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:31.266 [2024-12-13 09:14:25.132234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:31.266 [2024-12-13 09:14:25.144095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:31.266 [2024-12-13 09:14:25.144164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:31.544 [2024-12-13 09:14:25.156071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:31.544 [2024-12-13 09:14:25.156139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:31.544 [2024-12-13 09:14:25.168092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:31.544 [2024-12-13 09:14:25.168159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:31.544 [2024-12-13 09:14:25.180094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:31.544 [2024-12-13 09:14:25.180162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:31.544 [2024-12-13 09:14:25.192084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:31.544 [2024-12-13 09:14:25.192136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:31.544 [2024-12-13 09:14:25.204260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:31.544 [2024-12-13 09:14:25.204389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:31.544 [2024-12-13 09:14:25.216240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:31.544 [2024-12-13 09:14:25.216357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:31.544 [2024-12-13 09:14:25.228195]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.544 [2024-12-13 09:14:25.228306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace [... the same subsystem.c:2130 / nvmf_rpc.c:1520 error pair is logged for every retry from 09:14:25.240167 through 09:14:25.732438, roughly one pair every 12 ms, while the subsystem remains paused ...] 00:12:32.077 [2024-12-13 09:14:25.744444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.077 [2024-12-13 09:14:25.744509]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.077 [2024-12-13 09:14:25.756416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.077 [2024-12-13 09:14:25.756469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.077 [2024-12-13 09:14:25.768401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.077 [2024-12-13 09:14:25.768454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.077 [2024-12-13 09:14:25.780415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.077 [2024-12-13 09:14:25.780468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.077 [2024-12-13 09:14:25.792442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.077 [2024-12-13 09:14:25.792496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.077 [2024-12-13 09:14:25.804443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.077 [2024-12-13 09:14:25.804496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.077 [2024-12-13 09:14:25.816435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.077 [2024-12-13 09:14:25.816489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.077 [2024-12-13 09:14:25.828427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.077 [2024-12-13 09:14:25.828481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.077 [2024-12-13 09:14:25.840528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.077 [2024-12-13 09:14:25.840583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.077 [2024-12-13 09:14:25.852474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.077 [2024-12-13 09:14:25.852512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.077 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (69781) - No such process 00:12:32.077 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 69781 00:12:32.077 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:32.077 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.077 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:32.077 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.077 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:32.078 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.078 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:32.078 delay0 00:12:32.078 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.078 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:32.078 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.078 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:32.078 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.078 09:14:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:12:32.336 [2024-12-13 09:14:26.104813] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:38.900 Initializing NVMe Controllers 00:12:38.900 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:12:38.900 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:38.900 Initialization complete. Launching workers. 00:12:38.900 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 76 00:12:38.900 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 363, failed to submit 33 00:12:38.900 success 249, unsuccessful 114, failed 0 00:12:38.900 09:14:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:38.900 09:14:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:38.900 09:14:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:38.900 09:14:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:12:38.900 09:14:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:38.900 09:14:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:12:38.900 09:14:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:38.900 09:14:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:38.900 rmmod nvme_tcp 00:12:38.900 rmmod nvme_fabrics 00:12:38.900 rmmod nvme_keyring 00:12:38.900 09:14:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:38.900 09:14:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:12:38.900 09:14:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:12:38.900 09:14:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 69618 ']' 00:12:38.900 09:14:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 69618 00:12:38.900 09:14:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 69618 ']' 00:12:38.900 09:14:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 69618 00:12:38.900 09:14:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:12:38.900 09:14:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:38.900 09:14:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69618 00:12:38.900 09:14:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:38.900 killing process with pid 69618 00:12:38.900 09:14:32 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:38.900 09:14:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69618' 00:12:38.900 09:14:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 69618 00:12:38.900 09:14:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 69618 00:12:39.467 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:39.467 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:39.467 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:39.467 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:12:39.467 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:12:39.467 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:39.467 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:12:39.467 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:39.467 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:39.467 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:39.467 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:39.467 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:39.467 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:39.725 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:39.725 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:39.725 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:39.725 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:39.725 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:39.725 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:39.725 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:39.725 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:39.725 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:39.725 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:39.725 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.725 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.725 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.725 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 
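The zcopy run that just finished swaps the subsystem's namespace onto a delay bdev and then drives SPDK's abort example against it (the "success 249, unsuccessful 114, failed 0" summary above). A minimal sketch of that sequence, assuming a target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.3:4420 with a malloc0 bdev attached; the rpc variable is illustrative, and the parameter values are the ones shown in the log:

    #!/usr/bin/env bash
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Free NSID 1 so it can be reused by the delay bdev.
    $rpc nvmf_subsystem_remove_ns "$nqn" 1

    # Wrap malloc0 in a delay bdev with 1 s (1,000,000 us) average and p99 latencies.
    $rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

    # Publish the delay bdev as NSID 1 again.
    $rpc nvmf_subsystem_add_ns "$nqn" delay0 -n 1

    # Queue slow random I/O against the namespace and abort it, as the run above does.
    /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
        -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

The large delay values keep commands in flight long enough for the abort requests to find them, which is why the run reports a mix of successful and unsuccessful aborts rather than failures.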
00:12:39.725 00:12:39.725 real 0m27.683s 00:12:39.725 user 0m45.392s 00:12:39.725 sys 0m6.952s 00:12:39.725 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.725 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:39.725 ************************************ 00:12:39.725 END TEST nvmf_zcopy 00:12:39.725 ************************************ 00:12:39.725 09:14:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:39.725 09:14:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:39.725 09:14:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.725 09:14:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:39.725 ************************************ 00:12:39.725 START TEST nvmf_nmic 00:12:39.725 ************************************ 00:12:39.725 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:39.985 * Looking for test storage... 00:12:39.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:39.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.985 --rc genhtml_branch_coverage=1 00:12:39.985 --rc genhtml_function_coverage=1 00:12:39.985 --rc genhtml_legend=1 00:12:39.985 --rc geninfo_all_blocks=1 00:12:39.985 --rc geninfo_unexecuted_blocks=1 00:12:39.985 00:12:39.985 ' 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:39.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.985 --rc genhtml_branch_coverage=1 00:12:39.985 --rc genhtml_function_coverage=1 00:12:39.985 --rc genhtml_legend=1 00:12:39.985 --rc geninfo_all_blocks=1 00:12:39.985 --rc geninfo_unexecuted_blocks=1 00:12:39.985 00:12:39.985 ' 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:39.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.985 --rc genhtml_branch_coverage=1 00:12:39.985 --rc genhtml_function_coverage=1 00:12:39.985 --rc genhtml_legend=1 00:12:39.985 --rc geninfo_all_blocks=1 00:12:39.985 --rc geninfo_unexecuted_blocks=1 00:12:39.985 00:12:39.985 ' 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:39.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.985 --rc genhtml_branch_coverage=1 00:12:39.985 --rc genhtml_function_coverage=1 00:12:39.985 --rc genhtml_legend=1 00:12:39.985 --rc geninfo_all_blocks=1 00:12:39.985 --rc geninfo_unexecuted_blocks=1 00:12:39.985 00:12:39.985 ' 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.985 09:14:33 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:39.985 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:39.986 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:39.986 09:14:33 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:39.986 Cannot 
find device "nvmf_init_br" 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:39.986 Cannot find device "nvmf_init_br2" 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:39.986 Cannot find device "nvmf_tgt_br" 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:39.986 Cannot find device "nvmf_tgt_br2" 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:12:39.986 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:40.244 Cannot find device "nvmf_init_br" 00:12:40.244 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:12:40.244 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:40.244 Cannot find device "nvmf_init_br2" 00:12:40.244 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:12:40.244 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:40.244 Cannot find device "nvmf_tgt_br" 00:12:40.244 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:12:40.244 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:40.244 Cannot find device "nvmf_tgt_br2" 00:12:40.244 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:12:40.244 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:40.244 Cannot find device "nvmf_br" 00:12:40.244 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:12:40.244 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:40.244 Cannot find device "nvmf_init_if" 00:12:40.244 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:12:40.244 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:40.244 Cannot find device "nvmf_init_if2" 00:12:40.244 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:12:40.245 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:40.245 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:40.245 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:12:40.245 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:40.245 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:40.245 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:12:40.245 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:40.245 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
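The interface setup continuing below builds the test topology: veth pairs for the initiator and target interfaces, a dedicated network namespace for the target, and a bridge joining the peer ends. A condensed sketch of the same wiring for a single initiator/target pair, assuming root privileges and that none of these interface names exist yet; names, addresses, and the iptables rule are the ones the log uses, while the real common.sh helper sets up four pairs:

    #!/usr/bin/env bash
    set -e

    # The target lives in its own namespace; each side gets a veth pair.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Initiator address on the host, target address inside the namespace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the peer ends so 10.0.0.1 can reach 10.0.0.3.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Admit NVMe/TCP traffic on the initiator-facing interface, then sanity-check.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3
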
00:12:40.245 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:40.245 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:40.245 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:40.245 09:14:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:40.245 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:40.245 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:40.245 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:40.245 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:40.245 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:40.245 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:40.245 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:40.245 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:40.245 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:40.245 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:40.245 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:40.245 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:40.245 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:40.245 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:40.245 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:40.245 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:40.503 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:40.503 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:12:40.503 00:12:40.503 --- 10.0.0.3 ping statistics --- 00:12:40.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.503 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:40.503 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:40.503 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:12:40.503 00:12:40.503 --- 10.0.0.4 ping statistics --- 00:12:40.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.503 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:40.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:40.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:12:40.503 00:12:40.503 --- 10.0.0.1 ping statistics --- 00:12:40.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.503 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:40.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:40.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:12:40.503 00:12:40.503 --- 10.0.0.2 ping statistics --- 00:12:40.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.503 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=70184 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 70184 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 70184 ']' 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.503 09:14:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:40.503 [2024-12-13 09:14:34.383121] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:12:40.503 [2024-12-13 09:14:34.383310] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.761 [2024-12-13 09:14:34.562892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:41.020 [2024-12-13 09:14:34.658049] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:41.020 [2024-12-13 09:14:34.658116] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:41.020 [2024-12-13 09:14:34.658132] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.020 [2024-12-13 09:14:34.658142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.020 [2024-12-13 09:14:34.658152] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:41.020 [2024-12-13 09:14:34.659995] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.020 [2024-12-13 09:14:34.660177] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:41.020 [2024-12-13 09:14:34.660360] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.020 [2024-12-13 09:14:34.660914] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:41.020 [2024-12-13 09:14:34.824095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:41.586 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.586 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:12:41.586 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:41.586 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:41.586 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:41.586 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.586 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:41.586 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.586 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:41.586 [2024-12-13 09:14:35.435551] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.586 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.586 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:41.586 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.586 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:41.845 Malloc0 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:41.845 09:14:35 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:41.845 [2024-12-13 09:14:35.553337] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:41.845 test case1: single bdev can't be used in multiple subsystems 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:41.845 [2024-12-13 09:14:35.577010] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:41.845 [2024-12-13 09:14:35.577073] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:41.845 [2024-12-13 09:14:35.577092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.845 request: 00:12:41.845 { 00:12:41.845 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:41.845 "namespace": { 00:12:41.845 "bdev_name": "Malloc0", 00:12:41.845 "no_auto_visible": false, 00:12:41.845 "hide_metadata": false 00:12:41.845 }, 00:12:41.845 "method": "nvmf_subsystem_add_ns", 00:12:41.845 "req_id": 1 00:12:41.845 } 00:12:41.845 Got JSON-RPC error response 00:12:41.845 response: 00:12:41.845 { 00:12:41.845 "code": -32602, 00:12:41.845 "message": "Invalid parameters" 00:12:41.845 } 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:41.845 Adding namespace failed - expected result. 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:41.845 test case2: host connect to nvmf target in multiple paths 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:41.845 [2024-12-13 09:14:35.593251] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.845 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid=5267ba90-6d03-4c73-b69a-15b62f92a67a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:42.104 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid=5267ba90-6d03-4c73-b69a-15b62f92a67a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:12:42.104 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:42.104 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:12:42.104 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:42.104 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:42.104 09:14:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:12:44.009 09:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:44.009 09:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:44.009 09:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:44.270 09:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:44.270 09:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:12:44.270 09:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:12:44.270 09:14:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:44.270 [global] 00:12:44.270 thread=1 00:12:44.270 invalidate=1 00:12:44.270 rw=write 00:12:44.270 time_based=1 00:12:44.270 runtime=1 00:12:44.270 ioengine=libaio 00:12:44.270 direct=1 00:12:44.270 bs=4096 00:12:44.270 iodepth=1 00:12:44.270 norandommap=0 00:12:44.270 numjobs=1 00:12:44.270 00:12:44.270 verify_dump=1 00:12:44.270 verify_backlog=512 00:12:44.270 verify_state_save=0 00:12:44.270 do_verify=1 00:12:44.270 verify=crc32c-intel 00:12:44.270 [job0] 00:12:44.270 filename=/dev/nvme0n1 00:12:44.270 Could not set queue depth (nvme0n1) 00:12:44.270 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:44.270 fio-3.35 00:12:44.270 Starting 1 thread 00:12:45.648 00:12:45.648 job0: (groupid=0, jobs=1): err= 0: pid=70276: Fri Dec 13 09:14:39 2024 00:12:45.648 read: IOPS=2230, BW=8923KiB/s (9137kB/s)(8932KiB/1001msec) 00:12:45.648 slat (nsec): min=12561, max=87065, avg=17532.68, stdev=7034.25 00:12:45.648 clat (usec): min=177, max=349, avg=232.44, stdev=25.27 00:12:45.648 lat (usec): min=195, max=384, avg=249.98, stdev=26.98 00:12:45.648 clat percentiles (usec): 00:12:45.648 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 208], 00:12:45.648 | 30.00th=[ 217], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 239], 00:12:45.648 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 277], 00:12:45.648 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 318], 99.95th=[ 347], 00:12:45.648 | 99.99th=[ 351] 00:12:45.648 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:45.648 slat (usec): min=17, max=124, avg=24.40, stdev= 9.08 00:12:45.648 clat (usec): min=109, max=294, avg=144.53, stdev=20.18 00:12:45.648 lat (usec): min=129, max=419, avg=168.93, stdev=23.67 00:12:45.648 clat percentiles (usec): 00:12:45.648 | 1.00th=[ 116], 5.00th=[ 121], 10.00th=[ 124], 20.00th=[ 128], 00:12:45.648 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 145], 00:12:45.648 | 70.00th=[ 153], 80.00th=[ 161], 90.00th=[ 174], 95.00th=[ 182], 00:12:45.648 | 99.00th=[ 202], 99.50th=[ 212], 99.90th=[ 227], 99.95th=[ 237], 00:12:45.648 | 99.99th=[ 293] 00:12:45.648 bw ( KiB/s): min=11480, max=11480, per=100.00%, avg=11480.00, stdev= 0.00, samples=1 00:12:45.648 iops : min= 2870, max= 2870, avg=2870.00, stdev= 0.00, samples=1 00:12:45.648 lat (usec) : 250=88.50%, 500=11.50% 00:12:45.648 cpu : usr=3.00%, sys=7.10%, ctx=4793, majf=0, minf=5 00:12:45.649 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:45.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.649 issued rwts: total=2233,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:45.649 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:45.649 00:12:45.649 Run status group 0 (all jobs): 00:12:45.649 READ: bw=8923KiB/s (9137kB/s), 8923KiB/s-8923KiB/s (9137kB/s-9137kB/s), io=8932KiB (9146kB), run=1001-1001msec 00:12:45.649 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:12:45.649 00:12:45.649 Disk stats (read/write): 00:12:45.649 nvme0n1: ios=2097/2194, merge=0/0, ticks=540/368, 
in_queue=908, util=91.47% 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:45.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:45.649 rmmod nvme_tcp 00:12:45.649 rmmod nvme_fabrics 00:12:45.649 rmmod nvme_keyring 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 70184 ']' 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 70184 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 70184 ']' 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 70184 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70184 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:45.649 killing process with pid 70184 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70184' 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # 
kill 70184 00:12:45.649 09:14:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 70184 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:12:47.025 00:12:47.025 real 0m7.227s 00:12:47.025 user 0m21.810s 00:12:47.025 sys 0m2.478s 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.025 ************************************ 00:12:47.025 END TEST nvmf_nmic 00:12:47.025 ************************************ 00:12:47.025 09:14:40 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:47.025 ************************************ 00:12:47.025 START TEST nvmf_fio_target 00:12:47.025 ************************************ 00:12:47.025 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:47.285 * Looking for test storage... 00:12:47.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:47.285 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:47.285 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:12:47.285 09:14:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:47.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.285 --rc genhtml_branch_coverage=1 00:12:47.285 --rc genhtml_function_coverage=1 00:12:47.285 --rc genhtml_legend=1 00:12:47.285 --rc geninfo_all_blocks=1 00:12:47.285 --rc geninfo_unexecuted_blocks=1 00:12:47.285 00:12:47.285 ' 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:47.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.285 --rc genhtml_branch_coverage=1 00:12:47.285 --rc genhtml_function_coverage=1 00:12:47.285 --rc genhtml_legend=1 00:12:47.285 --rc geninfo_all_blocks=1 00:12:47.285 --rc geninfo_unexecuted_blocks=1 00:12:47.285 00:12:47.285 ' 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:47.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.285 --rc genhtml_branch_coverage=1 00:12:47.285 --rc genhtml_function_coverage=1 00:12:47.285 --rc genhtml_legend=1 00:12:47.285 --rc geninfo_all_blocks=1 00:12:47.285 --rc geninfo_unexecuted_blocks=1 00:12:47.285 00:12:47.285 ' 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:47.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.285 --rc genhtml_branch_coverage=1 00:12:47.285 --rc genhtml_function_coverage=1 00:12:47.285 --rc genhtml_legend=1 00:12:47.285 --rc geninfo_all_blocks=1 00:12:47.285 --rc geninfo_unexecuted_blocks=1 00:12:47.285 00:12:47.285 ' 00:12:47.285 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:47.286 
09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:47.286 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:47.286 09:14:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:47.286 Cannot find device "nvmf_init_br" 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:47.286 Cannot find device "nvmf_init_br2" 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:47.286 Cannot find device "nvmf_tgt_br" 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:47.286 Cannot find device "nvmf_tgt_br2" 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:47.286 Cannot find device "nvmf_init_br" 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:47.286 Cannot find device "nvmf_init_br2" 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:47.286 Cannot find device "nvmf_tgt_br" 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:12:47.286 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:47.545 Cannot find device "nvmf_tgt_br2" 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:47.545 Cannot find device "nvmf_br" 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:47.545 Cannot find device "nvmf_init_if" 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:47.545 Cannot find device "nvmf_init_if2" 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:47.545 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:12:47.545 
09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:47.545 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:47.545 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:47.804 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:47.804 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:47.804 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:47.804 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:47.804 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:47.804 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:47.804 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:47.804 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:47.804 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:12:47.804 00:12:47.804 --- 10.0.0.3 ping statistics --- 00:12:47.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.804 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:12:47.804 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:47.804 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:47.804 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:12:47.804 00:12:47.804 --- 10.0.0.4 ping statistics --- 00:12:47.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.804 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:12:47.804 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:47.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:47.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:12:47.804 00:12:47.804 --- 10.0.0.1 ping statistics --- 00:12:47.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.804 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:12:47.804 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:47.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:47.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.032 ms 00:12:47.804 00:12:47.804 --- 10.0.0.2 ping statistics --- 00:12:47.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.804 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:12:47.804 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:47.804 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:12:47.804 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:47.804 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:47.804 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:47.804 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:47.804 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:47.804 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:47.804 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:47.804 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:47.805 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:47.805 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:47.805 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.805 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=70517 00:12:47.805 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:47.805 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 70517 00:12:47.805 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 70517 ']' 00:12:47.805 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.805 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:47.805 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.805 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:47.805 09:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.805 [2024-12-13 09:14:41.618828] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
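For readability, the nvmf_veth_init sequence traced above amounts to roughly the following topology setup (condensed from the ip/iptables commands in the trace; the iptables rules are shown without the SPDK_NVMF comment tags that the ipts helper appends, and the per-interface "ip link set <dev> up" steps are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator path 1 (10.0.0.1/24)
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator path 2 (10.0.0.2/24)
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target path 1   (10.0.0.3/24)
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target path 2   (10.0.0.4/24)
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br  master nvmf_br
  ip link set nvmf_init_br2 master nvmf_br
  ip link set nvmf_tgt_br   master nvmf_br
  ip link set nvmf_tgt_br2  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings to 10.0.0.3/10.0.0.4 and, from inside the namespace, back to 10.0.0.1/10.0.0.2 verify both bridged paths before the nvmf target is launched inside nvmf_tgt_ns_spdk.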
00:12:47.805 [2024-12-13 09:14:41.618996] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.063 [2024-12-13 09:14:41.798830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:48.063 [2024-12-13 09:14:41.892754] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.063 [2024-12-13 09:14:41.892828] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:48.063 [2024-12-13 09:14:41.892846] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:48.063 [2024-12-13 09:14:41.892857] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:48.063 [2024-12-13 09:14:41.892868] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:48.063 [2024-12-13 09:14:41.894778] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.063 [2024-12-13 09:14:41.894924] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.063 [2024-12-13 09:14:41.895106] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.063 [2024-12-13 09:14:41.896030] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:48.322 [2024-12-13 09:14:42.069324] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:48.889 09:14:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:48.890 09:14:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:12:48.890 09:14:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:48.890 09:14:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:48.890 09:14:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.890 09:14:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.890 09:14:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:49.149 [2024-12-13 09:14:42.926123] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:49.149 09:14:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:49.408 09:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:49.408 09:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:49.974 09:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:49.974 09:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:50.232 09:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:50.232 09:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:50.491 09:14:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:50.491 09:14:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:50.749 09:14:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:51.008 09:14:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:51.008 09:14:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:51.574 09:14:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:51.574 09:14:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:51.832 09:14:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:51.832 09:14:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:52.091 09:14:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:52.350 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:52.350 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:52.608 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:52.608 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.866 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:53.125 [2024-12-13 09:14:46.817604] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:53.125 09:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:53.383 09:14:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:53.642 09:14:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid=5267ba90-6d03-4c73-b69a-15b62f92a67a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:53.642 09:14:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:53.642 09:14:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:12:53.642 09:14:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.642 09:14:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:12:53.642 09:14:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:12:53.642 09:14:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:12:56.174 09:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:56.174 09:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:56.174 09:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:56.174 09:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:12:56.174 09:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:56.174 09:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:12:56.174 09:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:56.174 [global] 00:12:56.174 thread=1 00:12:56.174 invalidate=1 00:12:56.174 rw=write 00:12:56.174 time_based=1 00:12:56.174 runtime=1 00:12:56.174 ioengine=libaio 00:12:56.174 direct=1 00:12:56.174 bs=4096 00:12:56.174 iodepth=1 00:12:56.174 norandommap=0 00:12:56.174 numjobs=1 00:12:56.174 00:12:56.174 verify_dump=1 00:12:56.174 verify_backlog=512 00:12:56.174 verify_state_save=0 00:12:56.174 do_verify=1 00:12:56.174 verify=crc32c-intel 00:12:56.174 [job0] 00:12:56.174 filename=/dev/nvme0n1 00:12:56.174 [job1] 00:12:56.174 filename=/dev/nvme0n2 00:12:56.174 [job2] 00:12:56.174 filename=/dev/nvme0n3 00:12:56.174 [job3] 00:12:56.174 filename=/dev/nvme0n4 00:12:56.174 Could not set queue depth (nvme0n1) 00:12:56.174 Could not set queue depth (nvme0n2) 00:12:56.174 Could not set queue depth (nvme0n3) 00:12:56.174 Could not set queue depth (nvme0n4) 00:12:56.174 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:56.174 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:56.174 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:56.174 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:56.174 fio-3.35 00:12:56.174 Starting 4 threads 00:12:57.110 00:12:57.110 job0: (groupid=0, jobs=1): err= 0: pid=70713: Fri Dec 13 09:14:50 2024 00:12:57.110 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:57.110 slat (nsec): min=13820, max=73581, avg=21286.64, stdev=6977.62 00:12:57.110 clat (usec): min=166, max=847, avg=303.81, stdev=88.17 00:12:57.110 lat (usec): min=180, max=893, avg=325.09, stdev=92.45 00:12:57.110 clat percentiles (usec): 00:12:57.110 | 1.00th=[ 172], 5.00th=[ 186], 10.00th=[ 194], 20.00th=[ 212], 00:12:57.110 | 30.00th=[ 273], 40.00th=[ 306], 50.00th=[ 314], 60.00th=[ 326], 00:12:57.110 | 70.00th=[ 334], 80.00th=[ 343], 90.00th=[ 363], 95.00th=[ 429], 00:12:57.110 | 99.00th=[ 644], 99.50th=[ 660], 99.90th=[ 701], 99.95th=[ 848], 00:12:57.110 | 99.99th=[ 848] 
00:12:57.110 write: IOPS=1850, BW=7401KiB/s (7578kB/s)(7408KiB/1001msec); 0 zone resets 00:12:57.110 slat (nsec): min=16309, max=85999, avg=31653.24, stdev=8082.06 00:12:57.110 clat (usec): min=116, max=2485, avg=234.19, stdev=76.31 00:12:57.110 lat (usec): min=136, max=2509, avg=265.85, stdev=78.56 00:12:57.110 clat percentiles (usec): 00:12:57.110 | 1.00th=[ 123], 5.00th=[ 139], 10.00th=[ 149], 20.00th=[ 172], 00:12:57.110 | 30.00th=[ 225], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 255], 00:12:57.110 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 289], 00:12:57.110 | 99.00th=[ 363], 99.50th=[ 469], 99.90th=[ 816], 99.95th=[ 2474], 00:12:57.110 | 99.99th=[ 2474] 00:12:57.110 bw ( KiB/s): min= 8175, max= 8175, per=31.70%, avg=8175.00, stdev= 0.00, samples=1 00:12:57.110 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:12:57.110 lat (usec) : 250=41.38%, 500=56.73%, 750=1.80%, 1000=0.06% 00:12:57.110 lat (msec) : 4=0.03% 00:12:57.110 cpu : usr=2.40%, sys=6.70%, ctx=3389, majf=0, minf=9 00:12:57.110 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:57.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.110 issued rwts: total=1536,1852,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.110 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:57.110 job1: (groupid=0, jobs=1): err= 0: pid=70714: Fri Dec 13 09:14:50 2024 00:12:57.110 read: IOPS=1399, BW=5598KiB/s (5733kB/s)(5604KiB/1001msec) 00:12:57.110 slat (nsec): min=15576, max=85541, avg=24882.34, stdev=9283.92 00:12:57.110 clat (usec): min=210, max=757, avg=359.22, stdev=68.98 00:12:57.110 lat (usec): min=236, max=819, avg=384.10, stdev=74.47 00:12:57.110 clat percentiles (usec): 00:12:57.110 | 1.00th=[ 277], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 306], 00:12:57.110 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 347], 00:12:57.110 | 70.00th=[ 379], 80.00th=[ 420], 90.00th=[ 474], 95.00th=[ 498], 00:12:57.110 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 734], 99.95th=[ 758], 00:12:57.110 | 99.99th=[ 758] 00:12:57.110 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:57.110 slat (usec): min=20, max=106, avg=33.99, stdev= 8.70 00:12:57.110 clat (usec): min=125, max=3250, avg=261.49, stdev=93.34 00:12:57.110 lat (usec): min=150, max=3302, avg=295.47, stdev=95.62 00:12:57.110 clat percentiles (usec): 00:12:57.110 | 1.00th=[ 133], 5.00th=[ 159], 10.00th=[ 225], 20.00th=[ 239], 00:12:57.110 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 265], 00:12:57.110 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 334], 00:12:57.110 | 99.00th=[ 445], 99.50th=[ 486], 99.90th=[ 889], 99.95th=[ 3261], 00:12:57.110 | 99.99th=[ 3261] 00:12:57.111 bw ( KiB/s): min= 8175, max= 8175, per=31.70%, avg=8175.00, stdev= 0.00, samples=1 00:12:57.111 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:12:57.111 lat (usec) : 250=19.51%, 500=77.94%, 750=2.45%, 1000=0.07% 00:12:57.111 lat (msec) : 4=0.03% 00:12:57.111 cpu : usr=1.80%, sys=7.00%, ctx=2938, majf=0, minf=11 00:12:57.111 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:57.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.111 issued rwts: total=1401,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:12:57.111 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:57.111 job2: (groupid=0, jobs=1): err= 0: pid=70715: Fri Dec 13 09:14:50 2024 00:12:57.111 read: IOPS=1364, BW=5459KiB/s (5590kB/s)(5464KiB/1001msec) 00:12:57.111 slat (nsec): min=15260, max=77768, avg=22821.06, stdev=6719.18 00:12:57.111 clat (usec): min=222, max=609, avg=363.63, stdev=60.88 00:12:57.111 lat (usec): min=246, max=654, avg=386.45, stdev=63.95 00:12:57.111 clat percentiles (usec): 00:12:57.111 | 1.00th=[ 281], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 318], 00:12:57.111 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 351], 00:12:57.111 | 70.00th=[ 383], 80.00th=[ 424], 90.00th=[ 465], 95.00th=[ 486], 00:12:57.111 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 570], 99.95th=[ 611], 00:12:57.111 | 99.99th=[ 611] 00:12:57.111 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:57.111 slat (nsec): min=22146, max=91955, avg=34126.54, stdev=7340.38 00:12:57.111 clat (usec): min=139, max=7307, avg=267.98, stdev=238.40 00:12:57.111 lat (usec): min=162, max=7332, avg=302.11, stdev=238.88 00:12:57.111 clat percentiles (usec): 00:12:57.111 | 1.00th=[ 149], 5.00th=[ 169], 10.00th=[ 217], 20.00th=[ 237], 00:12:57.111 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 262], 00:12:57.111 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 306], 00:12:57.111 | 99.00th=[ 457], 99.50th=[ 635], 99.90th=[ 3851], 99.95th=[ 7308], 00:12:57.111 | 99.99th=[ 7308] 00:12:57.111 bw ( KiB/s): min= 8175, max= 8175, per=31.70%, avg=8175.00, stdev= 0.00, samples=1 00:12:57.111 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:12:57.111 lat (usec) : 250=20.68%, 500=77.67%, 750=1.45% 00:12:57.111 lat (msec) : 2=0.03%, 4=0.14%, 10=0.03% 00:12:57.111 cpu : usr=1.50%, sys=7.10%, ctx=2903, majf=0, minf=9 00:12:57.111 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:57.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.111 issued rwts: total=1366,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.111 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:57.111 job3: (groupid=0, jobs=1): err= 0: pid=70716: Fri Dec 13 09:14:50 2024 00:12:57.111 read: IOPS=1500, BW=6000KiB/s (6144kB/s)(6012KiB/1002msec) 00:12:57.111 slat (nsec): min=12416, max=87842, avg=20356.44, stdev=6207.65 00:12:57.111 clat (usec): min=212, max=1228, avg=356.64, stdev=76.72 00:12:57.111 lat (usec): min=231, max=1243, avg=377.00, stdev=76.65 00:12:57.111 clat percentiles (usec): 00:12:57.111 | 1.00th=[ 273], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 306], 00:12:57.111 | 30.00th=[ 314], 40.00th=[ 318], 50.00th=[ 330], 60.00th=[ 338], 00:12:57.111 | 70.00th=[ 363], 80.00th=[ 424], 90.00th=[ 474], 95.00th=[ 506], 00:12:57.111 | 99.00th=[ 578], 99.50th=[ 603], 99.90th=[ 725], 99.95th=[ 1237], 00:12:57.111 | 99.99th=[ 1237] 00:12:57.111 write: IOPS=1532, BW=6132KiB/s (6279kB/s)(6144KiB/1002msec); 0 zone resets 00:12:57.111 slat (usec): min=22, max=132, avg=32.52, stdev= 6.56 00:12:57.111 clat (usec): min=131, max=387, avg=245.05, stdev=32.88 00:12:57.111 lat (usec): min=157, max=443, avg=277.56, stdev=33.39 00:12:57.111 clat percentiles (usec): 00:12:57.111 | 1.00th=[ 151], 5.00th=[ 172], 10.00th=[ 196], 20.00th=[ 227], 00:12:57.111 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 258], 00:12:57.111 | 70.00th=[ 262], 80.00th=[ 269], 
90.00th=[ 277], 95.00th=[ 285], 00:12:57.111 | 99.00th=[ 310], 99.50th=[ 326], 99.90th=[ 379], 99.95th=[ 388], 00:12:57.111 | 99.99th=[ 388] 00:12:57.111 bw ( KiB/s): min= 8175, max= 8175, per=31.70%, avg=8175.00, stdev= 0.00, samples=1 00:12:57.111 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:12:57.111 lat (usec) : 250=24.32%, 500=72.85%, 750=2.80% 00:12:57.111 lat (msec) : 2=0.03% 00:12:57.111 cpu : usr=1.30%, sys=6.89%, ctx=3041, majf=0, minf=7 00:12:57.111 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:57.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.111 issued rwts: total=1503,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.111 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:57.111 00:12:57.111 Run status group 0 (all jobs): 00:12:57.111 READ: bw=22.6MiB/s (23.7MB/s), 5459KiB/s-6138KiB/s (5590kB/s-6285kB/s), io=22.7MiB (23.8MB), run=1001-1002msec 00:12:57.111 WRITE: bw=25.2MiB/s (26.4MB/s), 6132KiB/s-7401KiB/s (6279kB/s-7578kB/s), io=25.2MiB (26.5MB), run=1001-1002msec 00:12:57.111 00:12:57.111 Disk stats (read/write): 00:12:57.111 nvme0n1: ios=1228/1536, merge=0/0, ticks=432/399, in_queue=831, util=88.57% 00:12:57.111 nvme0n2: ios=1155/1536, merge=0/0, ticks=410/423, in_queue=833, util=88.25% 00:12:57.111 nvme0n3: ios=1073/1536, merge=0/0, ticks=374/419, in_queue=793, util=88.12% 00:12:57.111 nvme0n4: ios=1210/1536, merge=0/0, ticks=412/394, in_queue=806, util=89.71% 00:12:57.111 09:14:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:57.111 [global] 00:12:57.111 thread=1 00:12:57.111 invalidate=1 00:12:57.111 rw=randwrite 00:12:57.111 time_based=1 00:12:57.111 runtime=1 00:12:57.111 ioengine=libaio 00:12:57.111 direct=1 00:12:57.111 bs=4096 00:12:57.111 iodepth=1 00:12:57.111 norandommap=0 00:12:57.111 numjobs=1 00:12:57.111 00:12:57.111 verify_dump=1 00:12:57.111 verify_backlog=512 00:12:57.111 verify_state_save=0 00:12:57.111 do_verify=1 00:12:57.111 verify=crc32c-intel 00:12:57.111 [job0] 00:12:57.111 filename=/dev/nvme0n1 00:12:57.111 [job1] 00:12:57.111 filename=/dev/nvme0n2 00:12:57.111 [job2] 00:12:57.111 filename=/dev/nvme0n3 00:12:57.111 [job3] 00:12:57.111 filename=/dev/nvme0n4 00:12:57.370 Could not set queue depth (nvme0n1) 00:12:57.370 Could not set queue depth (nvme0n2) 00:12:57.370 Could not set queue depth (nvme0n3) 00:12:57.370 Could not set queue depth (nvme0n4) 00:12:57.370 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:57.370 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:57.370 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:57.370 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:57.370 fio-3.35 00:12:57.370 Starting 4 threads 00:12:58.745 00:12:58.745 job0: (groupid=0, jobs=1): err= 0: pid=70775: Fri Dec 13 09:14:52 2024 00:12:58.745 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:58.745 slat (nsec): min=10657, max=75358, avg=20056.78, stdev=7406.47 00:12:58.745 clat (usec): min=184, max=2577, avg=339.56, stdev=97.24 00:12:58.745 lat (usec): min=208, max=2599, avg=359.62, 
stdev=100.01 00:12:58.745 clat percentiles (usec): 00:12:58.745 | 1.00th=[ 202], 5.00th=[ 273], 10.00th=[ 285], 20.00th=[ 297], 00:12:58.745 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 330], 00:12:58.745 | 70.00th=[ 338], 80.00th=[ 355], 90.00th=[ 408], 95.00th=[ 529], 00:12:58.745 | 99.00th=[ 660], 99.50th=[ 685], 99.90th=[ 816], 99.95th=[ 2573], 00:12:58.745 | 99.99th=[ 2573] 00:12:58.745 write: IOPS=1578, BW=6314KiB/s (6465kB/s)(6320KiB/1001msec); 0 zone resets 00:12:58.745 slat (usec): min=14, max=115, avg=30.22, stdev= 6.85 00:12:58.745 clat (usec): min=136, max=961, avg=247.96, stdev=48.90 00:12:58.745 lat (usec): min=163, max=993, avg=278.18, stdev=49.19 00:12:58.745 clat percentiles (usec): 00:12:58.745 | 1.00th=[ 145], 5.00th=[ 157], 10.00th=[ 178], 20.00th=[ 217], 00:12:58.745 | 30.00th=[ 237], 40.00th=[ 247], 50.00th=[ 255], 60.00th=[ 262], 00:12:58.745 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 306], 00:12:58.745 | 99.00th=[ 338], 99.50th=[ 379], 99.90th=[ 594], 99.95th=[ 963], 00:12:58.745 | 99.99th=[ 963] 00:12:58.745 bw ( KiB/s): min= 8192, max= 8192, per=32.51%, avg=8192.00, stdev= 0.00, samples=1 00:12:58.745 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:58.745 lat (usec) : 250=24.20%, 500=73.17%, 750=2.54%, 1000=0.06% 00:12:58.745 lat (msec) : 4=0.03% 00:12:58.745 cpu : usr=2.00%, sys=6.20%, ctx=3116, majf=0, minf=11 00:12:58.745 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:58.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.745 issued rwts: total=1536,1580,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.745 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:58.745 job1: (groupid=0, jobs=1): err= 0: pid=70776: Fri Dec 13 09:14:52 2024 00:12:58.745 read: IOPS=1495, BW=5982KiB/s (6126kB/s)(5988KiB/1001msec) 00:12:58.745 slat (nsec): min=14982, max=69498, avg=21537.30, stdev=7613.20 00:12:58.745 clat (usec): min=206, max=661, avg=339.68, stdev=65.41 00:12:58.745 lat (usec): min=225, max=684, avg=361.22, stdev=68.62 00:12:58.745 clat percentiles (usec): 00:12:58.745 | 1.00th=[ 269], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 302], 00:12:58.745 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 326], 00:12:58.745 | 70.00th=[ 334], 80.00th=[ 351], 90.00th=[ 461], 95.00th=[ 494], 00:12:58.745 | 99.00th=[ 553], 99.50th=[ 611], 99.90th=[ 644], 99.95th=[ 660], 00:12:58.745 | 99.99th=[ 660] 00:12:58.745 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:58.745 slat (usec): min=20, max=113, avg=33.38, stdev= 9.54 00:12:58.745 clat (usec): min=122, max=2561, avg=260.17, stdev=92.42 00:12:58.745 lat (usec): min=144, max=2605, avg=293.55, stdev=96.08 00:12:58.745 clat percentiles (usec): 00:12:58.745 | 1.00th=[ 133], 5.00th=[ 143], 10.00th=[ 165], 20.00th=[ 229], 00:12:58.745 | 30.00th=[ 239], 40.00th=[ 247], 50.00th=[ 255], 60.00th=[ 265], 00:12:58.745 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 306], 95.00th=[ 420], 00:12:58.745 | 99.00th=[ 465], 99.50th=[ 486], 99.90th=[ 1004], 99.95th=[ 2573], 00:12:58.745 | 99.99th=[ 2573] 00:12:58.745 bw ( KiB/s): min= 8192, max= 8192, per=32.51%, avg=8192.00, stdev= 0.00, samples=1 00:12:58.745 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:58.745 lat (usec) : 250=21.83%, 500=75.96%, 750=2.08%, 1000=0.10% 00:12:58.745 lat (msec) : 4=0.03% 00:12:58.745 cpu : 
usr=2.40%, sys=6.20%, ctx=3035, majf=0, minf=14 00:12:58.745 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:58.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.745 issued rwts: total=1497,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.745 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:58.745 job2: (groupid=0, jobs=1): err= 0: pid=70777: Fri Dec 13 09:14:52 2024 00:12:58.745 read: IOPS=1467, BW=5870KiB/s (6011kB/s)(5876KiB/1001msec) 00:12:58.745 slat (nsec): min=10740, max=74884, avg=18560.38, stdev=6692.35 00:12:58.745 clat (usec): min=180, max=2662, avg=336.74, stdev=87.57 00:12:58.745 lat (usec): min=195, max=2682, avg=355.30, stdev=89.72 00:12:58.745 clat percentiles (usec): 00:12:58.745 | 1.00th=[ 262], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 297], 00:12:58.745 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 330], 00:12:58.745 | 70.00th=[ 338], 80.00th=[ 351], 90.00th=[ 404], 95.00th=[ 482], 00:12:58.745 | 99.00th=[ 545], 99.50th=[ 611], 99.90th=[ 1319], 99.95th=[ 2671], 00:12:58.745 | 99.99th=[ 2671] 00:12:58.745 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:58.745 slat (nsec): min=13437, max=88621, avg=30821.91, stdev=10613.59 00:12:58.745 clat (usec): min=137, max=3337, avg=275.46, stdev=151.53 00:12:58.745 lat (usec): min=166, max=3360, avg=306.28, stdev=153.73 00:12:58.745 clat percentiles (usec): 00:12:58.745 | 1.00th=[ 149], 5.00th=[ 167], 10.00th=[ 182], 20.00th=[ 221], 00:12:58.746 | 30.00th=[ 241], 40.00th=[ 251], 50.00th=[ 260], 60.00th=[ 269], 00:12:58.746 | 70.00th=[ 277], 80.00th=[ 297], 90.00th=[ 400], 95.00th=[ 445], 00:12:58.746 | 99.00th=[ 478], 99.50th=[ 502], 99.90th=[ 3228], 99.95th=[ 3326], 00:12:58.746 | 99.99th=[ 3326] 00:12:58.746 bw ( KiB/s): min= 7208, max= 7208, per=28.60%, avg=7208.00, stdev= 0.00, samples=1 00:12:58.746 iops : min= 1802, max= 1802, avg=1802.00, stdev= 0.00, samples=1 00:12:58.746 lat (usec) : 250=19.50%, 500=78.50%, 750=1.76%, 1000=0.07% 00:12:58.746 lat (msec) : 2=0.03%, 4=0.13% 00:12:58.746 cpu : usr=1.00%, sys=6.80%, ctx=3006, majf=0, minf=14 00:12:58.746 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:58.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.746 issued rwts: total=1469,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.746 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:58.746 job3: (groupid=0, jobs=1): err= 0: pid=70778: Fri Dec 13 09:14:52 2024 00:12:58.746 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:58.746 slat (nsec): min=14971, max=65609, avg=20887.74, stdev=5646.94 00:12:58.746 clat (usec): min=209, max=2749, avg=331.69, stdev=89.72 00:12:58.746 lat (usec): min=227, max=2771, avg=352.58, stdev=91.82 00:12:58.746 clat percentiles (usec): 00:12:58.746 | 1.00th=[ 258], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 297], 00:12:58.746 | 30.00th=[ 306], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 322], 00:12:58.746 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 363], 95.00th=[ 515], 00:12:58.746 | 99.00th=[ 611], 99.50th=[ 619], 99.90th=[ 652], 99.95th=[ 2737], 00:12:58.746 | 99.99th=[ 2737] 00:12:58.746 write: IOPS=1652, BW=6609KiB/s (6768kB/s)(6616KiB/1001msec); 0 zone resets 00:12:58.746 slat (nsec): min=19275, max=86282, 
avg=31455.80, stdev=6812.19 00:12:58.746 clat (usec): min=133, max=823, avg=240.78, stdev=39.20 00:12:58.746 lat (usec): min=155, max=852, avg=272.23, stdev=40.37 00:12:58.746 clat percentiles (usec): 00:12:58.746 | 1.00th=[ 149], 5.00th=[ 169], 10.00th=[ 192], 20.00th=[ 212], 00:12:58.746 | 30.00th=[ 229], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 253], 00:12:58.746 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 289], 00:12:58.746 | 99.00th=[ 322], 99.50th=[ 371], 99.90th=[ 502], 99.95th=[ 824], 00:12:58.746 | 99.99th=[ 824] 00:12:58.746 bw ( KiB/s): min= 8192, max= 8192, per=32.51%, avg=8192.00, stdev= 0.00, samples=1 00:12:58.746 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:58.746 lat (usec) : 250=29.50%, 500=67.93%, 750=2.51%, 1000=0.03% 00:12:58.746 lat (msec) : 4=0.03% 00:12:58.746 cpu : usr=1.90%, sys=6.80%, ctx=3191, majf=0, minf=9 00:12:58.746 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:58.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.746 issued rwts: total=1536,1654,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.746 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:58.746 00:12:58.746 Run status group 0 (all jobs): 00:12:58.746 READ: bw=23.6MiB/s (24.7MB/s), 5870KiB/s-6138KiB/s (6011kB/s-6285kB/s), io=23.6MiB (24.7MB), run=1001-1001msec 00:12:58.746 WRITE: bw=24.6MiB/s (25.8MB/s), 6138KiB/s-6609KiB/s (6285kB/s-6768kB/s), io=24.6MiB (25.8MB), run=1001-1001msec 00:12:58.746 00:12:58.746 Disk stats (read/write): 00:12:58.746 nvme0n1: ios=1202/1536, merge=0/0, ticks=428/407, in_queue=835, util=88.18% 00:12:58.746 nvme0n2: ios=1156/1536, merge=0/0, ticks=413/416, in_queue=829, util=88.46% 00:12:58.746 nvme0n3: ios=1047/1536, merge=0/0, ticks=359/412, in_queue=771, util=88.19% 00:12:58.746 nvme0n4: ios=1206/1536, merge=0/0, ticks=411/395, in_queue=806, util=89.77% 00:12:58.746 09:14:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:58.746 [global] 00:12:58.746 thread=1 00:12:58.746 invalidate=1 00:12:58.746 rw=write 00:12:58.746 time_based=1 00:12:58.746 runtime=1 00:12:58.746 ioengine=libaio 00:12:58.746 direct=1 00:12:58.746 bs=4096 00:12:58.746 iodepth=128 00:12:58.746 norandommap=0 00:12:58.746 numjobs=1 00:12:58.746 00:12:58.746 verify_dump=1 00:12:58.746 verify_backlog=512 00:12:58.746 verify_state_save=0 00:12:58.746 do_verify=1 00:12:58.746 verify=crc32c-intel 00:12:58.746 [job0] 00:12:58.746 filename=/dev/nvme0n1 00:12:58.746 [job1] 00:12:58.746 filename=/dev/nvme0n2 00:12:58.746 [job2] 00:12:58.746 filename=/dev/nvme0n3 00:12:58.746 [job3] 00:12:58.746 filename=/dev/nvme0n4 00:12:58.746 Could not set queue depth (nvme0n1) 00:12:58.746 Could not set queue depth (nvme0n2) 00:12:58.746 Could not set queue depth (nvme0n3) 00:12:58.746 Could not set queue depth (nvme0n4) 00:12:58.746 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:58.746 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:58.746 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:58.746 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:58.746 fio-3.35 
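The [global]/[job0-3] listing above is the job file that fio-wrapper generated for this pass: 4 KiB sequential writes at queue depth 128 through libaio, with crc32c-intel verification and one job per connected namespace. As a hedged sketch (not part of the test scripts), roughly the same load could be replayed by hand with plain fio, assuming the four NVMe-oF namespaces are still attached as /dev/nvme0n1 through /dev/nvme0n4; the remaining options follow the job file printed above, with a few defaults omitted:

  # Hedged reproduction sketch; the standalone invocation and device paths are
  # assumptions, the option values mirror the wrapper-generated job file above.
  fio --ioengine=libaio --direct=1 --thread --numjobs=1 \
      --rw=write --bs=4096 --iodepth=128 --time_based --runtime=1 \
      --verify=crc32c-intel --do_verify=1 --verify_backlog=512 --verify_dump=1 \
      --name=job0 --filename=/dev/nvme0n1 \
      --name=job1 --filename=/dev/nvme0n2 \
      --name=job2 --filename=/dev/nvme0n3 \
      --name=job3 --filename=/dev/nvme0n4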
00:12:58.746 Starting 4 threads 00:13:00.124 00:13:00.124 job0: (groupid=0, jobs=1): err= 0: pid=70833: Fri Dec 13 09:14:53 2024 00:13:00.124 read: IOPS=4845, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1003msec) 00:13:00.124 slat (usec): min=5, max=5468, avg=100.85, stdev=443.29 00:13:00.124 clat (usec): min=1251, max=19060, avg=12984.41, stdev=1549.83 00:13:00.124 lat (usec): min=5113, max=19096, avg=13085.26, stdev=1553.02 00:13:00.124 clat percentiles (usec): 00:13:00.124 | 1.00th=[ 6128], 5.00th=[10290], 10.00th=[11600], 20.00th=[12387], 00:13:00.124 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13173], 00:13:00.124 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14353], 95.00th=[15139], 00:13:00.124 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18482], 99.95th=[18482], 00:13:00.124 | 99.99th=[19006] 00:13:00.124 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:13:00.124 slat (usec): min=12, max=5607, avg=91.75, stdev=483.20 00:13:00.124 clat (usec): min=5570, max=19455, avg=12424.79, stdev=1509.43 00:13:00.124 lat (usec): min=5597, max=19473, avg=12516.53, stdev=1573.15 00:13:00.124 clat percentiles (usec): 00:13:00.124 | 1.00th=[ 8717], 5.00th=[10421], 10.00th=[11207], 20.00th=[11469], 00:13:00.124 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12125], 60.00th=[12387], 00:13:00.124 | 70.00th=[12911], 80.00th=[13566], 90.00th=[14091], 95.00th=[15008], 00:13:00.124 | 99.00th=[17695], 99.50th=[18744], 99.90th=[19268], 99.95th=[19530], 00:13:00.124 | 99.99th=[19530] 00:13:00.124 bw ( KiB/s): min=20480, max=20521, per=34.65%, avg=20500.50, stdev=28.99, samples=2 00:13:00.124 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:13:00.124 lat (msec) : 2=0.01%, 10=3.64%, 20=96.35% 00:13:00.124 cpu : usr=4.69%, sys=14.07%, ctx=408, majf=0, minf=1 00:13:00.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:00.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:00.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:00.124 issued rwts: total=4860,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:00.124 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:00.124 job1: (groupid=0, jobs=1): err= 0: pid=70834: Fri Dec 13 09:14:53 2024 00:13:00.124 read: IOPS=2295, BW=9183KiB/s (9404kB/s)(9220KiB/1004msec) 00:13:00.124 slat (usec): min=4, max=6938, avg=206.48, stdev=1060.31 00:13:00.124 clat (usec): min=835, max=28898, avg=25958.16, stdev=3043.60 00:13:00.124 lat (usec): min=7355, max=28913, avg=26164.64, stdev=2857.14 00:13:00.124 clat percentiles (usec): 00:13:00.124 | 1.00th=[ 7701], 5.00th=[21103], 10.00th=[25035], 20.00th=[25822], 00:13:00.124 | 30.00th=[26084], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:13:00.124 | 70.00th=[26870], 80.00th=[27657], 90.00th=[28181], 95.00th=[28181], 00:13:00.124 | 99.00th=[28705], 99.50th=[28705], 99.90th=[28967], 99.95th=[28967], 00:13:00.124 | 99.99th=[28967] 00:13:00.124 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:13:00.124 slat (usec): min=10, max=9546, avg=197.77, stdev=987.03 00:13:00.124 clat (usec): min=18951, max=30308, avg=25853.51, stdev=1461.47 00:13:00.124 lat (usec): min=21065, max=30424, avg=26051.28, stdev=1076.06 00:13:00.124 clat percentiles (usec): 00:13:00.124 | 1.00th=[19792], 5.00th=[24511], 10.00th=[24773], 20.00th=[25297], 00:13:00.124 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:13:00.124 | 70.00th=[26084], 80.00th=[26608], 
90.00th=[27395], 95.00th=[28181], 00:13:00.124 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30278], 99.95th=[30278], 00:13:00.124 | 99.99th=[30278] 00:13:00.124 bw ( KiB/s): min= 9976, max=10504, per=17.31%, avg=10240.00, stdev=373.35, samples=2 00:13:00.124 iops : min= 2494, max= 2626, avg=2560.00, stdev=93.34, samples=2 00:13:00.124 lat (usec) : 1000=0.02% 00:13:00.124 lat (msec) : 10=0.66%, 20=1.44%, 50=97.88% 00:13:00.124 cpu : usr=2.59%, sys=7.38%, ctx=153, majf=0, minf=4 00:13:00.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:13:00.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:00.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:00.124 issued rwts: total=2305,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:00.124 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:00.124 job2: (groupid=0, jobs=1): err= 0: pid=70835: Fri Dec 13 09:14:53 2024 00:13:00.124 read: IOPS=2300, BW=9202KiB/s (9422kB/s)(9220KiB/1002msec) 00:13:00.124 slat (usec): min=4, max=7118, avg=206.14, stdev=1058.19 00:13:00.124 clat (usec): min=1108, max=29151, avg=26007.28, stdev=3102.71 00:13:00.124 lat (usec): min=7177, max=29166, avg=26213.42, stdev=2921.64 00:13:00.124 clat percentiles (usec): 00:13:00.124 | 1.00th=[ 7504], 5.00th=[20841], 10.00th=[25297], 20.00th=[25822], 00:13:00.124 | 30.00th=[26084], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:13:00.124 | 70.00th=[27132], 80.00th=[27657], 90.00th=[28181], 95.00th=[28443], 00:13:00.124 | 99.00th=[28967], 99.50th=[28967], 99.90th=[29230], 99.95th=[29230], 00:13:00.124 | 99.99th=[29230] 00:13:00.124 write: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:13:00.124 slat (usec): min=10, max=7421, avg=198.03, stdev=989.66 00:13:00.124 clat (usec): min=18620, max=28906, avg=25779.69, stdev=1431.76 00:13:00.124 lat (usec): min=20160, max=28992, avg=25977.72, stdev=1033.51 00:13:00.124 clat percentiles (usec): 00:13:00.124 | 1.00th=[19792], 5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:13:00.124 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25822], 60.00th=[25822], 00:13:00.124 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27395], 95.00th=[28181], 00:13:00.124 | 99.00th=[28967], 99.50th=[28967], 99.90th=[28967], 99.95th=[28967], 00:13:00.124 | 99.99th=[28967] 00:13:00.124 bw ( KiB/s): min= 9976, max=10504, per=17.31%, avg=10240.00, stdev=373.35, samples=2 00:13:00.124 iops : min= 2494, max= 2626, avg=2560.00, stdev=93.34, samples=2 00:13:00.124 lat (msec) : 2=0.02%, 10=0.66%, 20=1.56%, 50=97.76% 00:13:00.124 cpu : usr=1.90%, sys=6.59%, ctx=153, majf=0, minf=3 00:13:00.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:13:00.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:00.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:00.124 issued rwts: total=2305,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:00.124 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:00.124 job3: (groupid=0, jobs=1): err= 0: pid=70836: Fri Dec 13 09:14:53 2024 00:13:00.124 read: IOPS=4343, BW=17.0MiB/s (17.8MB/s)(17.0MiB/1001msec) 00:13:00.124 slat (usec): min=4, max=3906, avg=108.12, stdev=430.66 00:13:00.124 clat (usec): min=691, max=18852, avg=14270.60, stdev=1570.34 00:13:00.124 lat (usec): min=723, max=18888, avg=14378.72, stdev=1605.01 00:13:00.124 clat percentiles (usec): 00:13:00.124 | 1.00th=[ 5669], 5.00th=[12256], 10.00th=[13304], 
20.00th=[13829], 00:13:00.124 | 30.00th=[13960], 40.00th=[14091], 50.00th=[14222], 60.00th=[14484], 00:13:00.124 | 70.00th=[14615], 80.00th=[15139], 90.00th=[15795], 95.00th=[16319], 00:13:00.124 | 99.00th=[17433], 99.50th=[17695], 99.90th=[17957], 99.95th=[17957], 00:13:00.124 | 99.99th=[18744] 00:13:00.124 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:13:00.124 slat (usec): min=12, max=4128, avg=106.72, stdev=500.74 00:13:00.124 clat (usec): min=10876, max=18693, avg=13973.03, stdev=1077.98 00:13:00.124 lat (usec): min=10909, max=18742, avg=14079.75, stdev=1174.04 00:13:00.124 clat percentiles (usec): 00:13:00.124 | 1.00th=[11469], 5.00th=[12780], 10.00th=[12911], 20.00th=[13173], 00:13:00.124 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13829], 60.00th=[13960], 00:13:00.124 | 70.00th=[14222], 80.00th=[14615], 90.00th=[15401], 95.00th=[16450], 00:13:00.124 | 99.00th=[17433], 99.50th=[17957], 99.90th=[18220], 99.95th=[18482], 00:13:00.124 | 99.99th=[18744] 00:13:00.124 bw ( KiB/s): min=17912, max=18989, per=31.19%, avg=18450.50, stdev=761.55, samples=2 00:13:00.124 iops : min= 4478, max= 4747, avg=4612.50, stdev=190.21, samples=2 00:13:00.124 lat (usec) : 750=0.02% 00:13:00.124 lat (msec) : 4=0.22%, 10=0.54%, 20=99.22% 00:13:00.124 cpu : usr=4.10%, sys=13.70%, ctx=368, majf=0, minf=1 00:13:00.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:00.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:00.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:00.125 issued rwts: total=4348,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:00.125 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:00.125 00:13:00.125 Run status group 0 (all jobs): 00:13:00.125 READ: bw=53.8MiB/s (56.4MB/s), 9183KiB/s-18.9MiB/s (9404kB/s-19.8MB/s), io=54.0MiB (56.6MB), run=1001-1004msec 00:13:00.125 WRITE: bw=57.8MiB/s (60.6MB/s), 9.96MiB/s-19.9MiB/s (10.4MB/s-20.9MB/s), io=58.0MiB (60.8MB), run=1001-1004msec 00:13:00.125 00:13:00.125 Disk stats (read/write): 00:13:00.125 nvme0n1: ios=4146/4508, merge=0/0, ticks=25997/23138, in_queue=49135, util=88.58% 00:13:00.125 nvme0n2: ios=2092/2112, merge=0/0, ticks=12794/12548, in_queue=25342, util=88.46% 00:13:00.125 nvme0n3: ios=2048/2112, merge=0/0, ticks=11576/10895, in_queue=22471, util=88.96% 00:13:00.125 nvme0n4: ios=3614/4096, merge=0/0, ticks=16516/16223, in_queue=32739, util=89.72% 00:13:00.125 09:14:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:00.125 [global] 00:13:00.125 thread=1 00:13:00.125 invalidate=1 00:13:00.125 rw=randwrite 00:13:00.125 time_based=1 00:13:00.125 runtime=1 00:13:00.125 ioengine=libaio 00:13:00.125 direct=1 00:13:00.125 bs=4096 00:13:00.125 iodepth=128 00:13:00.125 norandommap=0 00:13:00.125 numjobs=1 00:13:00.125 00:13:00.125 verify_dump=1 00:13:00.125 verify_backlog=512 00:13:00.125 verify_state_save=0 00:13:00.125 do_verify=1 00:13:00.125 verify=crc32c-intel 00:13:00.125 [job0] 00:13:00.125 filename=/dev/nvme0n1 00:13:00.125 [job1] 00:13:00.125 filename=/dev/nvme0n2 00:13:00.125 [job2] 00:13:00.125 filename=/dev/nvme0n3 00:13:00.125 [job3] 00:13:00.125 filename=/dev/nvme0n4 00:13:00.125 Could not set queue depth (nvme0n1) 00:13:00.125 Could not set queue depth (nvme0n2) 00:13:00.125 Could not set queue depth (nvme0n3) 00:13:00.125 Could not set queue depth (nvme0n4) 00:13:00.125 job0: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:00.125 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:00.125 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:00.125 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:00.125 fio-3.35 00:13:00.125 Starting 4 threads 00:13:01.500 00:13:01.500 job0: (groupid=0, jobs=1): err= 0: pid=70889: Fri Dec 13 09:14:55 2024 00:13:01.500 read: IOPS=4988, BW=19.5MiB/s (20.4MB/s)(19.5MiB/1003msec) 00:13:01.500 slat (usec): min=7, max=6444, avg=93.34, stdev=582.07 00:13:01.500 clat (usec): min=1480, max=21255, avg=13043.24, stdev=1610.33 00:13:01.500 lat (usec): min=2429, max=25419, avg=13136.58, stdev=1634.04 00:13:01.500 clat percentiles (usec): 00:13:01.500 | 1.00th=[ 7832], 5.00th=[ 9765], 10.00th=[12125], 20.00th=[12518], 00:13:01.500 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13173], 60.00th=[13304], 00:13:01.500 | 70.00th=[13435], 80.00th=[13698], 90.00th=[14091], 95.00th=[14353], 00:13:01.500 | 99.00th=[19792], 99.50th=[20317], 99.90th=[21103], 99.95th=[21103], 00:13:01.500 | 99.99th=[21365] 00:13:01.500 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:13:01.500 slat (usec): min=7, max=10582, avg=95.90, stdev=558.27 00:13:01.500 clat (usec): min=6359, max=18420, avg=12081.89, stdev=1258.68 00:13:01.500 lat (usec): min=8405, max=18438, avg=12177.79, stdev=1156.59 00:13:01.500 clat percentiles (usec): 00:13:01.500 | 1.00th=[ 8029], 5.00th=[10683], 10.00th=[10945], 20.00th=[11338], 00:13:01.500 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:13:01.500 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13173], 95.00th=[13435], 00:13:01.500 | 99.00th=[18220], 99.50th=[18482], 99.90th=[18482], 99.95th=[18482], 00:13:01.500 | 99.99th=[18482] 00:13:01.500 bw ( KiB/s): min=20480, max=20480, per=34.76%, avg=20480.00, stdev= 0.00, samples=2 00:13:01.500 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:13:01.500 lat (msec) : 2=0.01%, 4=0.16%, 10=3.70%, 20=95.65%, 50=0.47% 00:13:01.500 cpu : usr=5.09%, sys=14.17%, ctx=207, majf=0, minf=13 00:13:01.500 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:01.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.500 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:01.500 issued rwts: total=5003,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:01.500 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:01.500 job1: (groupid=0, jobs=1): err= 0: pid=70890: Fri Dec 13 09:14:55 2024 00:13:01.500 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec) 00:13:01.500 slat (usec): min=6, max=14438, avg=159.02, stdev=820.39 00:13:01.500 clat (usec): min=11853, max=44148, avg=20050.47, stdev=5096.99 00:13:01.500 lat (usec): min=11873, max=44192, avg=20209.49, stdev=5148.22 00:13:01.500 clat percentiles (usec): 00:13:01.500 | 1.00th=[13042], 5.00th=[15270], 10.00th=[16319], 20.00th=[17171], 00:13:01.500 | 30.00th=[17433], 40.00th=[17695], 50.00th=[18220], 60.00th=[18744], 00:13:01.500 | 70.00th=[19268], 80.00th=[23200], 90.00th=[26870], 95.00th=[30540], 00:13:01.500 | 99.00th=[37487], 99.50th=[40109], 99.90th=[43254], 99.95th=[43254], 00:13:01.500 | 99.99th=[44303] 00:13:01.500 write: IOPS=2898, 
BW=11.3MiB/s (11.9MB/s)(11.4MiB/1010msec); 0 zone resets 00:13:01.500 slat (usec): min=12, max=9622, avg=193.77, stdev=868.01 00:13:01.500 clat (usec): min=8434, max=77131, avg=25965.01, stdev=16811.71 00:13:01.500 lat (usec): min=9079, max=77176, avg=26158.77, stdev=16933.38 00:13:01.500 clat percentiles (usec): 00:13:01.500 | 1.00th=[12256], 5.00th=[13173], 10.00th=[13566], 20.00th=[13829], 00:13:01.500 | 30.00th=[14091], 40.00th=[15533], 50.00th=[17171], 60.00th=[21365], 00:13:01.500 | 70.00th=[24773], 80.00th=[44827], 90.00th=[51119], 95.00th=[61604], 00:13:01.500 | 99.00th=[74974], 99.50th=[74974], 99.90th=[77071], 99.95th=[77071], 00:13:01.500 | 99.99th=[77071] 00:13:01.500 bw ( KiB/s): min= 9800, max=12600, per=19.01%, avg=11200.00, stdev=1979.90, samples=2 00:13:01.500 iops : min= 2450, max= 3150, avg=2800.00, stdev=494.97, samples=2 00:13:01.501 lat (msec) : 10=0.09%, 20=63.82%, 50=28.49%, 100=7.60% 00:13:01.501 cpu : usr=2.68%, sys=9.22%, ctx=237, majf=0, minf=3 00:13:01.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:13:01.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:01.501 issued rwts: total=2560,2927,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:01.501 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:01.501 job2: (groupid=0, jobs=1): err= 0: pid=70891: Fri Dec 13 09:14:55 2024 00:13:01.501 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:13:01.501 slat (usec): min=5, max=7565, avg=174.18, stdev=741.35 00:13:01.501 clat (usec): min=15047, max=60185, avg=23264.88, stdev=7951.46 00:13:01.501 lat (usec): min=15070, max=61388, avg=23439.07, stdev=8017.44 00:13:01.501 clat percentiles (usec): 00:13:01.501 | 1.00th=[15533], 5.00th=[17695], 10.00th=[18220], 20.00th=[18744], 00:13:01.501 | 30.00th=[19006], 40.00th=[19006], 50.00th=[19530], 60.00th=[21365], 00:13:01.501 | 70.00th=[24511], 80.00th=[26608], 90.00th=[31065], 95.00th=[42730], 00:13:01.501 | 99.00th=[55837], 99.50th=[57410], 99.90th=[58983], 99.95th=[60031], 00:13:01.501 | 99.99th=[60031] 00:13:01.501 write: IOPS=2209, BW=8840KiB/s (9052kB/s)(8884KiB/1005msec); 0 zone resets 00:13:01.501 slat (usec): min=13, max=10086, avg=281.42, stdev=1106.18 00:13:01.501 clat (usec): min=889, max=83634, avg=35379.07, stdev=20154.49 00:13:01.501 lat (usec): min=4310, max=83660, avg=35660.49, stdev=20287.74 00:13:01.501 clat percentiles (usec): 00:13:01.501 | 1.00th=[ 4752], 5.00th=[14484], 10.00th=[15139], 20.00th=[15926], 00:13:01.501 | 30.00th=[19792], 40.00th=[26346], 50.00th=[31851], 60.00th=[35390], 00:13:01.501 | 70.00th=[44827], 80.00th=[50594], 90.00th=[66847], 95.00th=[79168], 00:13:01.501 | 99.00th=[81265], 99.50th=[82314], 99.90th=[83362], 99.95th=[83362], 00:13:01.501 | 99.99th=[83362] 00:13:01.501 bw ( KiB/s): min= 8192, max= 8552, per=14.21%, avg=8372.00, stdev=254.56, samples=2 00:13:01.501 iops : min= 2048, max= 2138, avg=2093.00, stdev=63.64, samples=2 00:13:01.501 lat (usec) : 1000=0.02% 00:13:01.501 lat (msec) : 10=1.66%, 20=41.81%, 50=44.55%, 100=11.95% 00:13:01.501 cpu : usr=2.59%, sys=7.47%, ctx=256, majf=0, minf=19 00:13:01.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:13:01.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:01.501 issued rwts: total=2048,2221,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:13:01.501 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:01.501 job3: (groupid=0, jobs=1): err= 0: pid=70892: Fri Dec 13 09:14:55 2024 00:13:01.501 read: IOPS=4406, BW=17.2MiB/s (18.0MB/s)(17.3MiB/1004msec) 00:13:01.501 slat (usec): min=5, max=12877, avg=107.27, stdev=653.57 00:13:01.501 clat (usec): min=1869, max=28692, avg=14710.30, stdev=2657.66 00:13:01.501 lat (usec): min=5743, max=28740, avg=14817.56, stdev=2664.22 00:13:01.501 clat percentiles (usec): 00:13:01.501 | 1.00th=[ 7504], 5.00th=[10028], 10.00th=[13173], 20.00th=[13829], 00:13:01.501 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14615], 60.00th=[14746], 00:13:01.501 | 70.00th=[15008], 80.00th=[15401], 90.00th=[15926], 95.00th=[20317], 00:13:01.501 | 99.00th=[25560], 99.50th=[26608], 99.90th=[27919], 99.95th=[27919], 00:13:01.501 | 99.99th=[28705] 00:13:01.501 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:13:01.501 slat (usec): min=5, max=10414, avg=106.06, stdev=624.43 00:13:01.501 clat (usec): min=3423, max=27744, avg=13469.57, stdev=1911.20 00:13:01.501 lat (usec): min=3446, max=27754, avg=13575.63, stdev=1832.03 00:13:01.501 clat percentiles (usec): 00:13:01.501 | 1.00th=[ 4817], 5.00th=[10945], 10.00th=[11863], 20.00th=[12649], 00:13:01.501 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13566], 60.00th=[13829], 00:13:01.501 | 70.00th=[14091], 80.00th=[14353], 90.00th=[15533], 95.00th=[15926], 00:13:01.501 | 99.00th=[19268], 99.50th=[19268], 99.90th=[19530], 99.95th=[19530], 00:13:01.501 | 99.99th=[27657] 00:13:01.501 bw ( KiB/s): min=18416, max=18448, per=31.29%, avg=18432.00, stdev=22.63, samples=2 00:13:01.501 iops : min= 4604, max= 4612, avg=4608.00, stdev= 5.66, samples=2 00:13:01.501 lat (msec) : 2=0.01%, 4=0.23%, 10=4.33%, 20=92.81%, 50=2.61% 00:13:01.501 cpu : usr=4.29%, sys=12.76%, ctx=263, majf=0, minf=15 00:13:01.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:01.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:01.501 issued rwts: total=4424,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:01.501 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:01.501 00:13:01.501 Run status group 0 (all jobs): 00:13:01.501 READ: bw=54.3MiB/s (56.9MB/s), 8151KiB/s-19.5MiB/s (8347kB/s-20.4MB/s), io=54.8MiB (57.5MB), run=1003-1010msec 00:13:01.501 WRITE: bw=57.5MiB/s (60.3MB/s), 8840KiB/s-19.9MiB/s (9052kB/s-20.9MB/s), io=58.1MiB (60.9MB), run=1003-1010msec 00:13:01.501 00:13:01.501 Disk stats (read/write): 00:13:01.501 nvme0n1: ios=4146/4480, merge=0/0, ticks=50759/49434, in_queue=100193, util=87.88% 00:13:01.501 nvme0n2: ios=2348/2560, merge=0/0, ticks=23265/27082, in_queue=50347, util=88.47% 00:13:01.501 nvme0n3: ios=1536/1879, merge=0/0, ticks=11644/22581, in_queue=34225, util=88.54% 00:13:01.501 nvme0n4: ios=3584/4095, merge=0/0, ticks=50435/51499, in_queue=101934, util=89.58% 00:13:01.501 09:14:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:01.501 09:14:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=70911 00:13:01.501 09:14:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:01.501 09:14:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:01.501 [global] 00:13:01.501 thread=1 00:13:01.501 
invalidate=1 00:13:01.501 rw=read 00:13:01.501 time_based=1 00:13:01.501 runtime=10 00:13:01.501 ioengine=libaio 00:13:01.501 direct=1 00:13:01.501 bs=4096 00:13:01.501 iodepth=1 00:13:01.501 norandommap=1 00:13:01.501 numjobs=1 00:13:01.501 00:13:01.501 [job0] 00:13:01.501 filename=/dev/nvme0n1 00:13:01.501 [job1] 00:13:01.501 filename=/dev/nvme0n2 00:13:01.501 [job2] 00:13:01.501 filename=/dev/nvme0n3 00:13:01.501 [job3] 00:13:01.501 filename=/dev/nvme0n4 00:13:01.501 Could not set queue depth (nvme0n1) 00:13:01.501 Could not set queue depth (nvme0n2) 00:13:01.501 Could not set queue depth (nvme0n3) 00:13:01.501 Could not set queue depth (nvme0n4) 00:13:01.501 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:01.501 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:01.501 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:01.501 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:01.501 fio-3.35 00:13:01.501 Starting 4 threads 00:13:04.785 09:14:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:04.785 fio: pid=70959, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:04.785 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=33136640, buflen=4096 00:13:04.785 09:14:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:04.785 fio: pid=70958, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:04.785 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=36560896, buflen=4096 00:13:04.785 09:14:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:04.785 09:14:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:05.044 fio: pid=70956, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:05.044 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=62095360, buflen=4096 00:13:05.302 09:14:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:05.302 09:14:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:05.562 fio: pid=70957, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:05.562 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=6295552, buflen=4096 00:13:05.562 00:13:05.562 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70956: Fri Dec 13 09:14:59 2024 00:13:05.562 read: IOPS=4377, BW=17.1MiB/s (17.9MB/s)(59.2MiB/3463msec) 00:13:05.562 slat (usec): min=8, max=15329, avg=17.70, stdev=181.35 00:13:05.562 clat (usec): min=53, max=3802, avg=209.24, stdev=61.09 00:13:05.562 lat (usec): min=165, max=15597, avg=226.94, stdev=192.96 00:13:05.562 clat percentiles (usec): 00:13:05.562 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 180], 00:13:05.562 | 
30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 202], 00:13:05.562 | 70.00th=[ 210], 80.00th=[ 227], 90.00th=[ 269], 95.00th=[ 289], 00:13:05.562 | 99.00th=[ 383], 99.50th=[ 408], 99.90th=[ 478], 99.95th=[ 889], 00:13:05.562 | 99.99th=[ 2999] 00:13:05.562 bw ( KiB/s): min=16096, max=19464, per=35.73%, avg=18314.67, stdev=1477.14, samples=6 00:13:05.562 iops : min= 4024, max= 4866, avg=4578.67, stdev=369.29, samples=6 00:13:05.562 lat (usec) : 100=0.01%, 250=84.99%, 500=14.90%, 750=0.04%, 1000=0.01% 00:13:05.562 lat (msec) : 2=0.03%, 4=0.01% 00:13:05.562 cpu : usr=1.30%, sys=5.66%, ctx=15171, majf=0, minf=1 00:13:05.562 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:05.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.562 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.562 issued rwts: total=15161,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:05.562 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:05.562 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70957: Fri Dec 13 09:14:59 2024 00:13:05.562 read: IOPS=4584, BW=17.9MiB/s (18.8MB/s)(70.0MiB/3909msec) 00:13:05.562 slat (usec): min=8, max=12647, avg=17.04, stdev=161.33 00:13:05.562 clat (usec): min=138, max=4291, avg=199.67, stdev=74.38 00:13:05.562 lat (usec): min=150, max=12845, avg=216.72, stdev=179.25 00:13:05.562 clat percentiles (usec): 00:13:05.562 | 1.00th=[ 153], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 176], 00:13:05.562 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:13:05.562 | 70.00th=[ 198], 80.00th=[ 208], 90.00th=[ 251], 95.00th=[ 277], 00:13:05.562 | 99.00th=[ 375], 99.50th=[ 400], 99.90th=[ 611], 99.95th=[ 1680], 00:13:05.562 | 99.99th=[ 3785] 00:13:05.562 bw ( KiB/s): min=14238, max=20024, per=35.69%, avg=18294.57, stdev=2261.79, samples=7 00:13:05.562 iops : min= 3559, max= 5006, avg=4573.57, stdev=565.60, samples=7 00:13:05.562 lat (usec) : 250=89.97%, 500=9.89%, 750=0.06%, 1000=0.02% 00:13:05.562 lat (msec) : 2=0.02%, 4=0.03%, 10=0.01% 00:13:05.562 cpu : usr=1.20%, sys=5.89%, ctx=17939, majf=0, minf=1 00:13:05.562 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:05.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.562 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.562 issued rwts: total=17922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:05.562 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:05.562 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70958: Fri Dec 13 09:14:59 2024 00:13:05.562 read: IOPS=2793, BW=10.9MiB/s (11.4MB/s)(34.9MiB/3196msec) 00:13:05.562 slat (usec): min=14, max=12806, avg=23.90, stdev=158.18 00:13:05.562 clat (usec): min=169, max=1910, avg=332.09, stdev=48.12 00:13:05.562 lat (usec): min=184, max=13075, avg=356.00, stdev=164.60 00:13:05.562 clat percentiles (usec): 00:13:05.562 | 1.00th=[ 188], 5.00th=[ 225], 10.00th=[ 306], 20.00th=[ 318], 00:13:05.562 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 343], 00:13:05.562 | 70.00th=[ 347], 80.00th=[ 355], 90.00th=[ 367], 95.00th=[ 375], 00:13:05.562 | 99.00th=[ 412], 99.50th=[ 445], 99.90th=[ 627], 99.95th=[ 996], 00:13:05.562 | 99.99th=[ 1909] 00:13:05.562 bw ( KiB/s): min=10904, max=11568, per=21.63%, avg=11086.67, stdev=247.74, samples=6 00:13:05.562 iops : min= 
2726, max= 2892, avg=2771.67, stdev=61.93, samples=6 00:13:05.562 lat (usec) : 250=6.06%, 500=93.56%, 750=0.30%, 1000=0.02% 00:13:05.562 lat (msec) : 2=0.04% 00:13:05.562 cpu : usr=1.35%, sys=5.35%, ctx=8930, majf=0, minf=1 00:13:05.562 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:05.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.562 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.562 issued rwts: total=8927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:05.562 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:05.562 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70959: Fri Dec 13 09:14:59 2024 00:13:05.562 read: IOPS=2758, BW=10.8MiB/s (11.3MB/s)(31.6MiB/2933msec) 00:13:05.562 slat (usec): min=15, max=143, avg=23.32, stdev= 5.17 00:13:05.562 clat (usec): min=194, max=3106, avg=336.99, stdev=46.47 00:13:05.562 lat (usec): min=212, max=3127, avg=360.31, stdev=46.79 00:13:05.562 clat percentiles (usec): 00:13:05.562 | 1.00th=[ 285], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 318], 00:13:05.562 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 343], 00:13:05.562 | 70.00th=[ 347], 80.00th=[ 355], 90.00th=[ 367], 95.00th=[ 375], 00:13:05.562 | 99.00th=[ 396], 99.50th=[ 416], 99.90th=[ 515], 99.95th=[ 562], 00:13:05.562 | 99.99th=[ 3097] 00:13:05.562 bw ( KiB/s): min=10864, max=11144, per=21.43%, avg=10987.20, stdev=108.16, samples=5 00:13:05.562 iops : min= 2716, max= 2786, avg=2746.80, stdev=27.04, samples=5 00:13:05.562 lat (usec) : 250=0.41%, 500=99.43%, 750=0.11% 00:13:05.562 lat (msec) : 2=0.01%, 4=0.02% 00:13:05.562 cpu : usr=1.50%, sys=5.63%, ctx=8093, majf=0, minf=2 00:13:05.562 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:05.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.562 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.562 issued rwts: total=8091,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:05.562 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:05.562 00:13:05.562 Run status group 0 (all jobs): 00:13:05.562 READ: bw=50.1MiB/s (52.5MB/s), 10.8MiB/s-17.9MiB/s (11.3MB/s-18.8MB/s), io=196MiB (205MB), run=2933-3909msec 00:13:05.562 00:13:05.562 Disk stats (read/write): 00:13:05.562 nvme0n1: ios=14830/0, merge=0/0, ticks=3072/0, in_queue=3072, util=95.14% 00:13:05.562 nvme0n2: ios=17722/0, merge=0/0, ticks=3596/0, in_queue=3596, util=95.66% 00:13:05.562 nvme0n3: ios=8723/0, merge=0/0, ticks=2923/0, in_queue=2923, util=96.21% 00:13:05.562 nvme0n4: ios=7897/0, merge=0/0, ticks=2684/0, in_queue=2684, util=96.73% 00:13:05.821 09:14:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:05.821 09:14:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:06.080 09:14:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:06.080 09:14:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:06.647 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:13:06.647 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:06.906 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:06.906 09:15:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:07.474 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:07.474 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:07.733 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:07.733 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 70911 00:13:07.733 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:07.733 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.733 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.733 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:13:07.733 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.733 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:07.733 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:07.733 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.733 nvmf hotplug test: fio failed as expected 00:13:07.733 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:13:07.733 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:07.733 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:07.733 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.992 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:07.992 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:07.992 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:07.992 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:07.992 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:07.992 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:07.992 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:13:07.992 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:07.992 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:13:07.992 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:07.992 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:07.992 rmmod nvme_tcp 00:13:07.992 rmmod nvme_fabrics 00:13:07.992 rmmod nvme_keyring 00:13:07.992 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:07.992 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:13:07.992 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:13:07.992 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 70517 ']' 00:13:07.992 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 70517 00:13:07.992 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 70517 ']' 00:13:07.992 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 70517 00:13:07.992 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:13:07.992 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:07.992 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70517 00:13:07.992 killing process with pid 70517 00:13:07.992 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:07.992 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:07.992 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70517' 00:13:07.992 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 70517 00:13:07.992 09:15:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 70517 00:13:08.969 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:08.969 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:08.969 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:08.969 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:13:08.969 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:13:08.969 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:08.969 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:13:09.228 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:09.228 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:09.228 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:09.228 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:09.228 09:15:02 
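The iptables-save | grep -v SPDK_NVMF | iptables-restore sequence traced above is the firewall half of the teardown: rules added by the test framework carry an SPDK_NVMF tag, so cleanup can rewrite the ruleset without them while leaving unrelated rules intact. A hedged illustration of the pattern (the ACCEPT rule below is an assumed example, not taken from this log):

  # Assumed setup side: tag test rules with an iptables comment so they can be
  # found again later, e.g. opening the NVMe/TCP listener port.
  iptables -I INPUT -p tcp --dport 4420 -m comment --comment "SPDK_NVMF" -j ACCEPT

  # Teardown side, as traced above: dump the ruleset, drop the tagged lines,
  # and load the filtered ruleset back.
  iptables-save | grep -v SPDK_NVMF | iptables-restore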
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:09.228 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:09.229 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:09.229 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:09.229 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:09.229 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:09.229 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:09.229 09:15:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:09.229 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:09.229 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:09.229 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:09.229 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:09.229 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.229 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.229 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.229 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:13:09.229 00:13:09.229 real 0m22.218s 00:13:09.229 user 1m22.615s 00:13:09.229 sys 0m10.603s 00:13:09.229 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.229 ************************************ 00:13:09.229 END TEST nvmf_fio_target 00:13:09.229 ************************************ 00:13:09.229 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:09.488 ************************************ 00:13:09.488 START TEST nvmf_bdevio 00:13:09.488 ************************************ 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:09.488 * Looking for test storage... 
00:13:09.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:09.488 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:13:09.747 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:13:09.747 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:13:09.747 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:13:09.747 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:09.747 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:13:09.747 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:13:09.747 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:09.747 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:09.747 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:13:09.747 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:09.747 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:09.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.747 --rc genhtml_branch_coverage=1 00:13:09.747 --rc genhtml_function_coverage=1 00:13:09.747 --rc genhtml_legend=1 00:13:09.747 --rc geninfo_all_blocks=1 00:13:09.747 --rc geninfo_unexecuted_blocks=1 00:13:09.747 00:13:09.747 ' 00:13:09.747 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:09.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.748 --rc genhtml_branch_coverage=1 00:13:09.748 --rc genhtml_function_coverage=1 00:13:09.748 --rc genhtml_legend=1 00:13:09.748 --rc geninfo_all_blocks=1 00:13:09.748 --rc geninfo_unexecuted_blocks=1 00:13:09.748 00:13:09.748 ' 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:09.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.748 --rc genhtml_branch_coverage=1 00:13:09.748 --rc genhtml_function_coverage=1 00:13:09.748 --rc genhtml_legend=1 00:13:09.748 --rc geninfo_all_blocks=1 00:13:09.748 --rc geninfo_unexecuted_blocks=1 00:13:09.748 00:13:09.748 ' 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:09.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.748 --rc genhtml_branch_coverage=1 00:13:09.748 --rc genhtml_function_coverage=1 00:13:09.748 --rc genhtml_legend=1 00:13:09.748 --rc geninfo_all_blocks=1 00:13:09.748 --rc geninfo_unexecuted_blocks=1 00:13:09.748 00:13:09.748 ' 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:09.748 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
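The lcov probe traced a few entries above (scripts/common.sh: lt 1.15 2 via cmp_versions) splits each version string on ".", "-" and ":" and compares the fields numerically, left to right. Below is a minimal self-contained sketch of that comparison; the helper names follow the trace, but the body is simplified for illustration, only handles the strict "<" and ">" cases, and is not the verbatim scripts/common.sh code.

# usage: cmp_versions 1.15 '<' 2
cmp_versions() {
    local IFS=.-: op=$2 v a b
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"

    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        [[ $a =~ ^[0-9]+$ ]] || a=0      # non-numeric fields count as 0 in this sketch
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    return 1                             # versions equal: not strictly ordered
}

lt() { cmp_versions "$1" '<' "$2"; }     # wrapper named as in the trace

lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # the check exercised above

Because lcov reports 1.15 here, the check succeeds and the run falls back to the "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1" style options seen in the LCOV_OPTS export above.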
00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:09.748 Cannot find device "nvmf_init_br" 00:13:09.748 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:09.749 Cannot find device "nvmf_init_br2" 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:09.749 Cannot find device "nvmf_tgt_br" 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:09.749 Cannot find device "nvmf_tgt_br2" 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:09.749 Cannot find device "nvmf_init_br" 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:09.749 Cannot find device "nvmf_init_br2" 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:09.749 Cannot find device "nvmf_tgt_br" 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:09.749 Cannot find device "nvmf_tgt_br2" 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:09.749 Cannot find device "nvmf_br" 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:09.749 Cannot find device "nvmf_init_if" 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:09.749 Cannot find device "nvmf_init_if2" 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:09.749 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:09.749 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:09.749 
09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:09.749 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:10.008 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:10.008 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:10.008 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:10.008 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:10.008 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:10.008 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:10.008 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:10.008 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:10.008 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:10.008 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:10.008 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:10.008 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:10.008 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:10.008 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:10.008 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:10.008 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:10.008 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:10.008 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:10.008 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:10.008 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:10.008 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:10.008 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:13:10.008 00:13:10.008 --- 10.0.0.3 ping statistics --- 00:13:10.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.008 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:13:10.008 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:10.008 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:10.008 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:13:10.008 00:13:10.008 --- 10.0.0.4 ping statistics --- 00:13:10.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.008 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:13:10.008 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:10.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:10.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:13:10.008 00:13:10.008 --- 10.0.0.1 ping statistics --- 00:13:10.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.008 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:13:10.008 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:10.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:10.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:13:10.008 00:13:10.008 --- 10.0.0.2 ping statistics --- 00:13:10.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.008 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:13:10.008 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:10.008 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:13:10.008 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:10.009 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:10.009 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:10.009 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:10.009 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:10.009 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:10.009 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:10.009 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:10.009 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:10.009 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:10.009 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:10.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.009 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=71291 00:13:10.009 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:10.009 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 71291 00:13:10.009 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 71291 ']' 00:13:10.009 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.009 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:10.009 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.009 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:10.009 09:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:10.009 [2024-12-13 09:15:03.871322] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
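The nvmf_veth_init trace above is the per-run network bring-up: veth pairs for the initiator and target sides, the target ends moved into the nvmf_tgt_ns_spdk namespace, 10.0.0.0/24 addressing, all host-side ends joined through the nvmf_br bridge, ACCEPT rules tagged with an SPDK_NVMF comment so teardown can find them later, and pings to confirm reachability before the target application starts. A condensed, hedged reconstruction of that sequence, reduced to a single initiator/target pair (names and addresses taken from the trace; requires root), looks like this:

# Condensed sketch of the topology built by nvmf_veth_init above.
# Only one initiator and one target interface are shown; the real helper
# creates nvmf_init_if2/nvmf_tgt_if2 the same way.
set -e
NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# veth pairs: *_if gets an address, *_br joins the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns "$NS"           # target side lives in the namespace

# Addresses match the trace: initiator 10.0.0.1, target 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up

# Bridge the host-side peers so initiator and target traffic can cross
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Allow NVMe/TCP in; the comment lets teardown strip exactly these rules later
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

ping -c 1 10.0.0.3                            # reachability check, as in the trace

The second pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) is created the same way, which is why four pings appear in the log above.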
00:13:10.009 [2024-12-13 09:15:03.871730] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.267 [2024-12-13 09:15:04.051696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:10.526 [2024-12-13 09:15:04.184510] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.526 [2024-12-13 09:15:04.184977] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.526 [2024-12-13 09:15:04.185666] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:10.526 [2024-12-13 09:15:04.186325] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:10.526 [2024-12-13 09:15:04.186694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:10.526 [2024-12-13 09:15:04.189192] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:13:10.526 [2024-12-13 09:15:04.189384] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:13:10.526 [2024-12-13 09:15:04.189439] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:13:10.526 [2024-12-13 09:15:04.189426] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:13:10.526 [2024-12-13 09:15:04.378602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:11.094 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.094 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:13:11.094 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:11.094 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:11.094 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:11.094 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.094 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:11.094 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.094 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:11.094 [2024-12-13 09:15:04.880098] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.094 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.094 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:11.094 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.094 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:11.094 Malloc0 00:13:11.094 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.094 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:13:11.094 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.094 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:11.094 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.094 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:11.094 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.095 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:11.352 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.352 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:11.352 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.352 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:11.352 [2024-12-13 09:15:04.988550] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:11.352 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.352 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:11.352 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:11.352 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:13:11.352 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:13:11.352 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:11.352 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:11.352 { 00:13:11.352 "params": { 00:13:11.352 "name": "Nvme$subsystem", 00:13:11.352 "trtype": "$TEST_TRANSPORT", 00:13:11.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:11.352 "adrfam": "ipv4", 00:13:11.352 "trsvcid": "$NVMF_PORT", 00:13:11.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:11.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:11.352 "hdgst": ${hdgst:-false}, 00:13:11.352 "ddgst": ${ddgst:-false} 00:13:11.352 }, 00:13:11.352 "method": "bdev_nvme_attach_controller" 00:13:11.352 } 00:13:11.352 EOF 00:13:11.352 )") 00:13:11.352 09:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:13:11.352 09:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
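gen_nvmf_target_json, traced above, assembles the JSON that bdevio consumes: one bdev_nvme_attach_controller block per subsystem, emitted from a heredoc, normalized with jq and handed over on a substituted file descriptor (the --json /dev/fd/62 in the invocation above). The following is a simplified sketch of that pattern, hard-coding the single controller block this run produces; the real helper joins several such blocks and may add further wrapping, so treat this as an illustration rather than the exact common.sh logic.

# Values match the block printed by gen_nvmf_target_json in this run.
subsystem=1
nvme_json=$(cat <<EOF | jq .
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)

# bdevio then reads the config from a substituted file descriptor, as in the
# "--json /dev/fd/62" invocation above:
#   bdevio --json <(echo "$nvme_json")

With that configuration, bdevio attaches Nvme1 over TCP to 10.0.0.3:4420 and runs its block-device test suite against it, which is what the CUnit output below shows.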
00:13:11.353 09:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:13:11.353 09:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:11.353 "params": { 00:13:11.353 "name": "Nvme1", 00:13:11.353 "trtype": "tcp", 00:13:11.353 "traddr": "10.0.0.3", 00:13:11.353 "adrfam": "ipv4", 00:13:11.353 "trsvcid": "4420", 00:13:11.353 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:11.353 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:11.353 "hdgst": false, 00:13:11.353 "ddgst": false 00:13:11.353 }, 00:13:11.353 "method": "bdev_nvme_attach_controller" 00:13:11.353 }' 00:13:11.353 [2024-12-13 09:15:05.102742] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:13:11.353 [2024-12-13 09:15:05.102895] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71327 ] 00:13:11.611 [2024-12-13 09:15:05.292565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:11.611 [2024-12-13 09:15:05.429464] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.611 [2024-12-13 09:15:05.429590] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.611 [2024-12-13 09:15:05.429602] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.869 [2024-12-13 09:15:05.636383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:12.128 I/O targets: 00:13:12.128 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:12.128 00:13:12.128 00:13:12.128 CUnit - A unit testing framework for C - Version 2.1-3 00:13:12.128 http://cunit.sourceforge.net/ 00:13:12.128 00:13:12.128 00:13:12.128 Suite: bdevio tests on: Nvme1n1 00:13:12.128 Test: blockdev write read block ...passed 00:13:12.128 Test: blockdev write zeroes read block ...passed 00:13:12.128 Test: blockdev write zeroes read no split ...passed 00:13:12.128 Test: blockdev write zeroes read split ...passed 00:13:12.128 Test: blockdev write zeroes read split partial ...passed 00:13:12.128 Test: blockdev reset ...[2024-12-13 09:15:05.903521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:13:12.128 [2024-12-13 09:15:05.903940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:13:12.128 [2024-12-13 09:15:05.915825] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:13:12.128 passed 00:13:12.128 Test: blockdev write read 8 blocks ...passed 00:13:12.128 Test: blockdev write read size > 128k ...passed 00:13:12.128 Test: blockdev write read invalid size ...passed 00:13:12.128 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:12.128 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:12.128 Test: blockdev write read max offset ...passed 00:13:12.128 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:12.128 Test: blockdev writev readv 8 blocks ...passed 00:13:12.128 Test: blockdev writev readv 30 x 1block ...passed 00:13:12.128 Test: blockdev writev readv block ...passed 00:13:12.128 Test: blockdev writev readv size > 128k ...passed 00:13:12.128 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:12.128 Test: blockdev comparev and writev ...[2024-12-13 09:15:05.928157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:12.128 [2024-12-13 09:15:05.928219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:12.128 [2024-12-13 09:15:05.928249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:12.128 [2024-12-13 09:15:05.928268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:12.128 [2024-12-13 09:15:05.928939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:12.128 [2024-12-13 09:15:05.929140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:12.129 [2024-12-13 09:15:05.929179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:12.129 [2024-12-13 09:15:05.929199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:12.129 [2024-12-13 09:15:05.929577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:12.129 [2024-12-13 09:15:05.929613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:12.129 [2024-12-13 09:15:05.929654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:12.129 [2024-12-13 09:15:05.929674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:12.129 [2024-12-13 09:15:05.929991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:12.129 [2024-12-13 09:15:05.930024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:12.129 [2024-12-13 09:15:05.930047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:12.129 [2024-12-13 09:15:05.930064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:12.129 passed 00:13:12.129 Test: blockdev nvme passthru rw ...passed 00:13:12.129 Test: blockdev nvme passthru vendor specific ...[2024-12-13 09:15:05.931201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:12.129 [2024-12-13 09:15:05.931255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:12.129 passed 00:13:12.129 Test: blockdev nvme admin passthru ...[2024-12-13 09:15:05.931415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:12.129 [2024-12-13 09:15:05.931475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:12.129 [2024-12-13 09:15:05.931630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:12.129 [2024-12-13 09:15:05.931665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:12.129 [2024-12-13 09:15:05.931822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:12.129 [2024-12-13 09:15:05.931854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:12.129 passed 00:13:12.129 Test: blockdev copy ...passed 00:13:12.129 00:13:12.129 Run Summary: Type Total Ran Passed Failed Inactive 00:13:12.129 suites 1 1 n/a 0 0 00:13:12.129 tests 23 23 23 0 0 00:13:12.129 asserts 152 152 152 0 n/a 00:13:12.129 00:13:12.129 Elapsed time = 0.268 seconds 00:13:13.065 09:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.065 09:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.065 09:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:13.065 09:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.065 09:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:13.065 09:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:13.065 09:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:13.065 09:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:13:13.324 09:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:13.324 09:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:13:13.324 09:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:13.324 09:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:13.324 rmmod nvme_tcp 00:13:13.324 rmmod nvme_fabrics 00:13:13.324 rmmod nvme_keyring 00:13:13.324 09:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:13.324 09:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:13:13.324 09:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
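For reference, the target-side setup this bdevio run exercised (the rpc_cmd calls traced earlier) reduces to five RPCs: create the TCP transport, create a 64 MiB malloc bdev with 512-byte blocks, create subsystem cnode1, attach the bdev as a namespace, and listen on 10.0.0.3:4420. rpc_cmd forwards its arguments to the target's RPC server, so a hedged direct equivalent using scripts/rpc.py (arguments copied from the trace; the path assumes the spdk_repo layout used throughout this run) would be:

# Direct rpc.py equivalents of the rpc_cmd calls traced earlier in this test.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192        # transport options as passed above
$RPC bdev_malloc_create 64 512 -b Malloc0           # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420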
00:13:13.324 09:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 71291 ']' 00:13:13.324 09:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 71291 00:13:13.324 09:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 71291 ']' 00:13:13.324 09:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 71291 00:13:13.324 09:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:13:13.324 09:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:13.324 09:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71291 00:13:13.324 killing process with pid 71291 00:13:13.324 09:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:13:13.324 09:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:13:13.324 09:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71291' 00:13:13.324 09:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 71291 00:13:13.324 09:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 71291 00:13:14.260 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:14.260 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:14.260 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:14.260 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:13:14.260 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:13:14.260 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:13:14.260 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:14.260 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:14.260 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:14.260 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:14.260 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:14.520 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:14.520 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:14.520 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:14.520 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:14.520 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:14.520 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:14.520 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:14.520 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:13:14.520 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:14.520 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:14.520 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:14.520 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:14.520 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.520 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:14.520 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.520 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:13:14.520 00:13:14.520 real 0m5.229s 00:13:14.520 user 0m19.267s 00:13:14.520 sys 0m1.065s 00:13:14.520 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.520 09:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:14.520 ************************************ 00:13:14.520 END TEST nvmf_bdevio 00:13:14.520 ************************************ 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:14.780 00:13:14.780 real 2m55.681s 00:13:14.780 user 7m50.773s 00:13:14.780 sys 0m54.181s 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:14.780 ************************************ 00:13:14.780 END TEST nvmf_target_core 00:13:14.780 ************************************ 00:13:14.780 09:15:08 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:14.780 09:15:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:14.780 09:15:08 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.780 09:15:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:14.780 ************************************ 00:13:14.780 START TEST nvmf_target_extra 00:13:14.780 ************************************ 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:14.780 * Looking for test storage... 
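Teardown for nvmf_bdevio, traced just above before nvmf_target_extra starts, mirrors the setup: kill the nvmf_tgt pid recorded at startup (after checking the process name), unload the nvme-tcp modules, strip every iptables rule carrying the SPDK_NVMF comment, and delete the veth, bridge and namespace objects. A condensed sketch of the two less obvious steps, the guarded kill and the comment-based iptables cleanup, follows; it is a simplification of the traced helpers, not their full source.

# Guarded kill, as in killprocess above: only signal the pid if the process
# still exists (the real helper also inspects the command name).
killprocess() {
    local pid=$1
    [[ $(ps --no-headers -o comm= "$pid") ]] || return 0   # already gone
    kill "$pid" && wait "$pid" 2>/dev/null || true
}

# iptr, as traced above: drop every rule tagged with an SPDK_NVMF comment by
# round-tripping the ruleset through iptables-save / iptables-restore.
iptr() {
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}

# Usage in this run's teardown (pid 71291 was recorded at startup):
#   killprocess 71291
#   iptr
#   ip link delete nvmf_br type bridge
#   ip netns delete nvmf_tgt_ns_spdk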
00:13:14.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:14.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.780 --rc genhtml_branch_coverage=1 00:13:14.780 --rc genhtml_function_coverage=1 00:13:14.780 --rc genhtml_legend=1 00:13:14.780 --rc geninfo_all_blocks=1 00:13:14.780 --rc geninfo_unexecuted_blocks=1 00:13:14.780 00:13:14.780 ' 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:14.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.780 --rc genhtml_branch_coverage=1 00:13:14.780 --rc genhtml_function_coverage=1 00:13:14.780 --rc genhtml_legend=1 00:13:14.780 --rc geninfo_all_blocks=1 00:13:14.780 --rc geninfo_unexecuted_blocks=1 00:13:14.780 00:13:14.780 ' 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:14.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.780 --rc genhtml_branch_coverage=1 00:13:14.780 --rc genhtml_function_coverage=1 00:13:14.780 --rc genhtml_legend=1 00:13:14.780 --rc geninfo_all_blocks=1 00:13:14.780 --rc geninfo_unexecuted_blocks=1 00:13:14.780 00:13:14.780 ' 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:14.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.780 --rc genhtml_branch_coverage=1 00:13:14.780 --rc genhtml_function_coverage=1 00:13:14.780 --rc genhtml_legend=1 00:13:14.780 --rc geninfo_all_blocks=1 00:13:14.780 --rc geninfo_unexecuted_blocks=1 00:13:14.780 00:13:14.780 ' 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:14.780 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.041 09:15:08 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:15.041 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:15.041 ************************************ 00:13:15.041 START TEST nvmf_auth_target 00:13:15.041 ************************************ 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:15.041 * Looking for test storage... 
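For reference, the lcov version check traced above (and repeated below for the auth test) splits the two version strings on '.', '-' and ':' and compares them element by element. The helper below is a hand-written illustration of that logic, not the exact body of scripts/common.sh.

```bash
# Illustration only: element-wise version comparison in the spirit of the
# traced scripts/common.sh lt/cmp_versions steps.  Assumes numeric components.
lt_sketch() {   # usage: lt_sketch 1.15 2  -> exit status 0 when $1 < $2
    local IFS='.-:' v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
    done
    return 1    # equal versions are not "less than"
}

# The traced decision: lcov 1.15 is older than 2, so the extra branch and
# function coverage switches get appended to LCOV_OPTS.
if lt_sketch 1.15 2; then
    LCOV_OPTS+=' --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi
```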
00:13:15.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:15.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.041 --rc genhtml_branch_coverage=1 00:13:15.041 --rc genhtml_function_coverage=1 00:13:15.041 --rc genhtml_legend=1 00:13:15.041 --rc geninfo_all_blocks=1 00:13:15.041 --rc geninfo_unexecuted_blocks=1 00:13:15.041 00:13:15.041 ' 00:13:15.041 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:15.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.041 --rc genhtml_branch_coverage=1 00:13:15.041 --rc genhtml_function_coverage=1 00:13:15.041 --rc genhtml_legend=1 00:13:15.042 --rc geninfo_all_blocks=1 00:13:15.042 --rc geninfo_unexecuted_blocks=1 00:13:15.042 00:13:15.042 ' 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:15.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.042 --rc genhtml_branch_coverage=1 00:13:15.042 --rc genhtml_function_coverage=1 00:13:15.042 --rc genhtml_legend=1 00:13:15.042 --rc geninfo_all_blocks=1 00:13:15.042 --rc geninfo_unexecuted_blocks=1 00:13:15.042 00:13:15.042 ' 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:15.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.042 --rc genhtml_branch_coverage=1 00:13:15.042 --rc genhtml_function_coverage=1 00:13:15.042 --rc genhtml_legend=1 00:13:15.042 --rc geninfo_all_blocks=1 00:13:15.042 --rc geninfo_unexecuted_blocks=1 00:13:15.042 00:13:15.042 ' 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:15.042 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:15.042 
09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:15.042 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:15.302 Cannot find device "nvmf_init_br" 00:13:15.302 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:13:15.302 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:15.302 Cannot find device "nvmf_init_br2" 00:13:15.302 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:13:15.302 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:15.302 Cannot find device "nvmf_tgt_br" 00:13:15.302 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:13:15.302 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:15.302 Cannot find device "nvmf_tgt_br2" 00:13:15.302 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:13:15.302 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:15.302 Cannot find device "nvmf_init_br" 00:13:15.302 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:13:15.302 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:15.302 Cannot find device "nvmf_init_br2" 00:13:15.302 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:13:15.302 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:15.302 Cannot find device "nvmf_tgt_br" 00:13:15.302 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:13:15.302 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:15.302 Cannot find device "nvmf_tgt_br2" 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:15.302 Cannot find device "nvmf_br" 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:15.302 Cannot find device "nvmf_init_if" 00:13:15.302 09:15:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:15.302 Cannot find device "nvmf_init_if2" 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:15.302 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:15.302 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:15.302 09:15:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:15.302 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:15.561 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:15.561 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:13:15.561 00:13:15.561 --- 10.0.0.3 ping statistics --- 00:13:15.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.561 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:15.561 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:15.561 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:13:15.561 00:13:15.561 --- 10.0.0.4 ping statistics --- 00:13:15.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.561 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:15.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:15.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:13:15.561 00:13:15.561 --- 10.0.0.1 ping statistics --- 00:13:15.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.561 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:15.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:15.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:13:15.561 00:13:15.561 --- 10.0.0.2 ping statistics --- 00:13:15.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.561 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=71660 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 71660 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 71660 ']' 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
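For reference, the nvmf_veth_init steps traced above boil down to the sketch below: two initiator-side veth pairs on the host, two target-side pairs whose nvmf_tgt_* ends live in the nvmf_tgt_ns_spdk namespace, the four peer ends enslaved to the nvmf_br bridge, iptables ACCEPT rules for TCP port 4420, and ping checks in both directions. Device names and addresses are copied from the log; this is a condensed illustration, not SPDK's exact helper (cleanup and the SPDK_NVMF rule comments are omitted).

```bash
# Illustration only: the virtual topology built for NET_TYPE=virt.
ip netns add nvmf_tgt_ns_spdk

# Two initiator-side and two target-side veth pairs; the *_br ends will be
# enslaved to a bridge, the nvmf_tgt_* ends move into the target namespace.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator addresses 10.0.0.1/.2, target addresses 10.0.0.3/.4.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# One bridge ties the four host-side peer ends together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
    ip link set "$dev" master nvmf_br
done

# Let NVMe/TCP traffic (port 4420) in on the initiator interfaces, allow
# forwarding across the bridge, then verify reachability in both directions.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
```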
00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.561 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=71692 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=81c505b6b155a62b67907accf64e099ad09b596017c39133 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.GZL 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 81c505b6b155a62b67907accf64e099ad09b596017c39133 0 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 81c505b6b155a62b67907accf64e099ad09b596017c39133 0 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=81c505b6b155a62b67907accf64e099ad09b596017c39133 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:16.939 09:15:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.GZL 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.GZL 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.GZL 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9d8146fd3be06c59cf92e8458559e33a912116481939e18a39fbcae6b6bef460 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.bhU 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9d8146fd3be06c59cf92e8458559e33a912116481939e18a39fbcae6b6bef460 3 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9d8146fd3be06c59cf92e8458559e33a912116481939e18a39fbcae6b6bef460 3 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:16.939 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9d8146fd3be06c59cf92e8458559e33a912116481939e18a39fbcae6b6bef460 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.bhU 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.bhU 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.bhU 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:13:16.940 09:15:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fa5a7c9b02b9e85fb0b207db945b9d84 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.vcH 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fa5a7c9b02b9e85fb0b207db945b9d84 1 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fa5a7c9b02b9e85fb0b207db945b9d84 1 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fa5a7c9b02b9e85fb0b207db945b9d84 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.vcH 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.vcH 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.vcH 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8fabb6181f274c90f3bacb8d89da42319e1cb31e286ef6b7 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Kq3 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8fabb6181f274c90f3bacb8d89da42319e1cb31e286ef6b7 2 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8fabb6181f274c90f3bacb8d89da42319e1cb31e286ef6b7 2 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8fabb6181f274c90f3bacb8d89da42319e1cb31e286ef6b7 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Kq3 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Kq3 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Kq3 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fd9679df41457a21244c294c7d5398c1f0733e99e8ea3c7b 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.fQ5 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fd9679df41457a21244c294c7d5398c1f0733e99e8ea3c7b 2 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fd9679df41457a21244c294c7d5398c1f0733e99e8ea3c7b 2 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fd9679df41457a21244c294c7d5398c1f0733e99e8ea3c7b 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.fQ5 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.fQ5 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.fQ5 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:16.940 09:15:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0037c80734697025d3d722414f72f891 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.s5z 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0037c80734697025d3d722414f72f891 1 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0037c80734697025d3d722414f72f891 1 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0037c80734697025d3d722414f72f891 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:13:16.940 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.s5z 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.s5z 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.s5z 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2e389056c4ba5968c32c05da2d8dae19f946d4fee15793ff3990e5aea7a03d48 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.MEw 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
2e389056c4ba5968c32c05da2d8dae19f946d4fee15793ff3990e5aea7a03d48 3 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2e389056c4ba5968c32c05da2d8dae19f946d4fee15793ff3990e5aea7a03d48 3 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2e389056c4ba5968c32c05da2d8dae19f946d4fee15793ff3990e5aea7a03d48 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.MEw 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.MEw 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.MEw 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:13:17.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 71660 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 71660 ']' 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:17.199 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:17.458 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:17.458 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:17.458 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 71692 /var/tmp/host.sock 00:13:17.458 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 71692 ']' 00:13:17.458 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:17.458 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:17.458 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
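For reference, the gen_dhchap_key calls above draw random bytes with xxd and wrap them with an inline python helper whose body the trace does not show. The stand-alone sketch below assumes the commonly documented DH-HMAC-CHAP secret layout, "DHHC-1:<two-digit hash id>:<base64 of key bytes followed by their little-endian CRC-32>:"; it is an illustration, not SPDK's exact helper.

```bash
# Illustration only: build a DH-HMAC-CHAP secret the way the traced
# gen_dhchap_key steps suggest.  The DHHC-1 wrapping below is an assumed,
# commonly documented layout, since the trace omits the python helper's body.
gen_dhchap_key_sketch() {
    local digest=$1     # 0=null, 1=sha256, 2=sha384, 3=sha512 (as in the trace)
    local bytes=$2      # 24 -> 48 hex chars, 32 -> 64 hex chars
    local hexkey
    hexkey=$(xxd -p -c0 -l "$bytes" /dev/urandom)
    python3 - "$digest" "$hexkey" <<'EOF'
import base64, binascii, sys
digest, hexkey = int(sys.argv[1]), sys.argv[2]
raw = bytes.fromhex(hexkey)
crc = binascii.crc32(raw).to_bytes(4, "little")   # CRC-32 appended to the key
print(f"DHHC-1:{digest:02x}:{base64.b64encode(raw + crc).decode()}:")
EOF
}

# Example mirroring "gen_dhchap_key null 48": a 48-hex-character key with no
# transform, written to a mode-0600 file ready for keyring_file_add_key.
keyfile=$(mktemp -t spdk.key-null.XXX)
gen_dhchap_key_sketch 0 24 > "$keyfile"
chmod 0600 "$keyfile"
```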
00:13:17.458 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:17.458 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.027 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:18.027 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:18.027 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:13:18.027 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.027 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.027 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.027 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:18.027 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.GZL 00:13:18.027 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.027 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.027 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.027 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.GZL 00:13:18.027 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.GZL 00:13:18.286 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.bhU ]] 00:13:18.286 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bhU 00:13:18.286 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.286 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.286 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.286 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bhU 00:13:18.286 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bhU 00:13:18.546 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:18.546 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.vcH 00:13:18.546 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.546 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.546 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.546 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.vcH 00:13:18.546 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.vcH 00:13:18.805 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Kq3 ]] 00:13:18.805 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Kq3 00:13:18.805 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.805 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.805 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.805 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Kq3 00:13:18.805 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Kq3 00:13:18.805 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:18.805 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.fQ5 00:13:18.805 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.805 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.064 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.064 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.fQ5 00:13:19.064 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.fQ5 00:13:19.323 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.s5z ]] 00:13:19.323 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.s5z 00:13:19.323 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.323 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.323 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.323 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.s5z 00:13:19.323 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.s5z 00:13:19.582 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:19.582 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.MEw 00:13:19.582 09:15:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.582 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.582 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.582 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.MEw 00:13:19.582 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.MEw 00:13:19.854 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:13:19.854 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:19.854 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:19.854 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:19.854 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:19.854 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:20.127 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:13:20.127 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:20.127 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:20.127 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:20.127 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:20.127 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.127 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.127 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.127 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.127 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.127 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.127 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.127 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.386 00:13:20.387 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:20.387 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:20.387 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.646 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.646 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.646 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.646 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.646 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.646 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:20.646 { 00:13:20.646 "cntlid": 1, 00:13:20.646 "qid": 0, 00:13:20.646 "state": "enabled", 00:13:20.646 "thread": "nvmf_tgt_poll_group_000", 00:13:20.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:13:20.646 "listen_address": { 00:13:20.646 "trtype": "TCP", 00:13:20.646 "adrfam": "IPv4", 00:13:20.646 "traddr": "10.0.0.3", 00:13:20.646 "trsvcid": "4420" 00:13:20.646 }, 00:13:20.646 "peer_address": { 00:13:20.646 "trtype": "TCP", 00:13:20.646 "adrfam": "IPv4", 00:13:20.646 "traddr": "10.0.0.1", 00:13:20.646 "trsvcid": "51344" 00:13:20.646 }, 00:13:20.646 "auth": { 00:13:20.646 "state": "completed", 00:13:20.646 "digest": "sha256", 00:13:20.646 "dhgroup": "null" 00:13:20.646 } 00:13:20.646 } 00:13:20.646 ]' 00:13:20.646 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:20.646 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:20.646 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:20.646 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:20.646 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:20.906 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.906 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.906 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.165 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:13:21.165 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:13:25.355 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.355 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:13:25.355 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.355 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.355 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.355 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:25.355 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:25.355 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:25.355 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:13:25.355 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:25.355 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:25.355 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:25.355 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:25.355 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.355 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:25.355 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.355 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.355 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.355 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:25.355 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:25.355 09:15:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:25.615 00:13:25.615 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:25.615 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:25.615 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.183 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.183 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.183 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.183 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.183 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.183 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:26.183 { 00:13:26.183 "cntlid": 3, 00:13:26.183 "qid": 0, 00:13:26.183 "state": "enabled", 00:13:26.183 "thread": "nvmf_tgt_poll_group_000", 00:13:26.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:13:26.183 "listen_address": { 00:13:26.183 "trtype": "TCP", 00:13:26.183 "adrfam": "IPv4", 00:13:26.183 "traddr": "10.0.0.3", 00:13:26.183 "trsvcid": "4420" 00:13:26.183 }, 00:13:26.183 "peer_address": { 00:13:26.183 "trtype": "TCP", 00:13:26.183 "adrfam": "IPv4", 00:13:26.183 "traddr": "10.0.0.1", 00:13:26.183 "trsvcid": "51384" 00:13:26.183 }, 00:13:26.183 "auth": { 00:13:26.183 "state": "completed", 00:13:26.183 "digest": "sha256", 00:13:26.183 "dhgroup": "null" 00:13:26.183 } 00:13:26.183 } 00:13:26.183 ]' 00:13:26.183 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:26.183 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:26.183 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:26.183 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:26.183 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:26.183 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.183 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.183 09:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.442 09:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret 
DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:13:26.442 09:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:13:27.010 09:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.010 09:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:13:27.010 09:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.010 09:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.010 09:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.010 09:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:27.010 09:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:27.010 09:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:27.270 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:13:27.270 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:27.270 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:27.270 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:27.270 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:27.270 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.270 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:27.270 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.270 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.270 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.270 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:27.270 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:27.270 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:27.838 00:13:27.838 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:27.838 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:27.838 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.097 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.097 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.097 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.097 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.097 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.097 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:28.097 { 00:13:28.097 "cntlid": 5, 00:13:28.097 "qid": 0, 00:13:28.097 "state": "enabled", 00:13:28.097 "thread": "nvmf_tgt_poll_group_000", 00:13:28.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:13:28.097 "listen_address": { 00:13:28.097 "trtype": "TCP", 00:13:28.097 "adrfam": "IPv4", 00:13:28.097 "traddr": "10.0.0.3", 00:13:28.097 "trsvcid": "4420" 00:13:28.097 }, 00:13:28.097 "peer_address": { 00:13:28.097 "trtype": "TCP", 00:13:28.097 "adrfam": "IPv4", 00:13:28.097 "traddr": "10.0.0.1", 00:13:28.097 "trsvcid": "51398" 00:13:28.097 }, 00:13:28.097 "auth": { 00:13:28.097 "state": "completed", 00:13:28.097 "digest": "sha256", 00:13:28.097 "dhgroup": "null" 00:13:28.097 } 00:13:28.097 } 00:13:28.097 ]' 00:13:28.097 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:28.097 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:28.097 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:28.097 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:28.098 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:28.098 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.098 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.098 09:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.357 09:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:13:28.357 09:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:13:28.924 09:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.924 09:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:13:28.924 09:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.924 09:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.183 09:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.183 09:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:29.183 09:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:29.183 09:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:29.442 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:13:29.442 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:29.442 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:29.442 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:29.442 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:29.442 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.442 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key3 00:13:29.442 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.442 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.442 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.442 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:29.442 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:29.442 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:29.701 00:13:29.701 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.701 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.701 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.960 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.960 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.960 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.960 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.960 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.960 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.960 { 00:13:29.960 "cntlid": 7, 00:13:29.960 "qid": 0, 00:13:29.960 "state": "enabled", 00:13:29.960 "thread": "nvmf_tgt_poll_group_000", 00:13:29.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:13:29.960 "listen_address": { 00:13:29.960 "trtype": "TCP", 00:13:29.960 "adrfam": "IPv4", 00:13:29.960 "traddr": "10.0.0.3", 00:13:29.960 "trsvcid": "4420" 00:13:29.960 }, 00:13:29.960 "peer_address": { 00:13:29.960 "trtype": "TCP", 00:13:29.960 "adrfam": "IPv4", 00:13:29.960 "traddr": "10.0.0.1", 00:13:29.960 "trsvcid": "51414" 00:13:29.960 }, 00:13:29.960 "auth": { 00:13:29.960 "state": "completed", 00:13:29.960 "digest": "sha256", 00:13:29.960 "dhgroup": "null" 00:13:29.960 } 00:13:29.960 } 00:13:29.960 ]' 00:13:29.960 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:29.960 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:29.960 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:29.960 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:29.960 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:29.960 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.960 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.960 09:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.219 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:13:30.219 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:13:30.787 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.787 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:13:30.787 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.787 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.045 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.045 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:31.045 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:31.045 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:31.045 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:31.308 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:13:31.308 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:31.308 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:31.308 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:31.308 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:31.308 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.308 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.308 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.308 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.308 09:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.308 09:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.308 09:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.308 09:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.567 00:13:31.567 09:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:31.567 09:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.567 09:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.826 09:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.826 09:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.826 09:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.826 09:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.826 09:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.826 09:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:31.826 { 00:13:31.826 "cntlid": 9, 00:13:31.826 "qid": 0, 00:13:31.826 "state": "enabled", 00:13:31.826 "thread": "nvmf_tgt_poll_group_000", 00:13:31.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:13:31.826 "listen_address": { 00:13:31.826 "trtype": "TCP", 00:13:31.826 "adrfam": "IPv4", 00:13:31.826 "traddr": "10.0.0.3", 00:13:31.826 "trsvcid": "4420" 00:13:31.826 }, 00:13:31.826 "peer_address": { 00:13:31.826 "trtype": "TCP", 00:13:31.826 "adrfam": "IPv4", 00:13:31.826 "traddr": "10.0.0.1", 00:13:31.826 "trsvcid": "49662" 00:13:31.826 }, 00:13:31.826 "auth": { 00:13:31.826 "state": "completed", 00:13:31.826 "digest": "sha256", 00:13:31.826 "dhgroup": "ffdhe2048" 00:13:31.826 } 00:13:31.826 } 00:13:31.826 ]' 00:13:31.826 09:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:31.826 09:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:31.826 09:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:31.826 09:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:31.826 09:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:32.087 09:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.087 09:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.087 09:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.346 
09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:13:32.346 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:13:32.914 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.914 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:13:32.914 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.915 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.915 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.915 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:32.915 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:32.915 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:33.174 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:13:33.174 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:33.174 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:33.174 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:33.174 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:33.174 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.174 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.174 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.174 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.174 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.174 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.174 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.174 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.433 00:13:33.433 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.433 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.433 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.692 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.692 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.692 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.692 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.692 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.692 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:33.692 { 00:13:33.692 "cntlid": 11, 00:13:33.692 "qid": 0, 00:13:33.692 "state": "enabled", 00:13:33.692 "thread": "nvmf_tgt_poll_group_000", 00:13:33.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:13:33.692 "listen_address": { 00:13:33.692 "trtype": "TCP", 00:13:33.692 "adrfam": "IPv4", 00:13:33.692 "traddr": "10.0.0.3", 00:13:33.692 "trsvcid": "4420" 00:13:33.692 }, 00:13:33.692 "peer_address": { 00:13:33.692 "trtype": "TCP", 00:13:33.692 "adrfam": "IPv4", 00:13:33.692 "traddr": "10.0.0.1", 00:13:33.692 "trsvcid": "49680" 00:13:33.692 }, 00:13:33.692 "auth": { 00:13:33.692 "state": "completed", 00:13:33.692 "digest": "sha256", 00:13:33.692 "dhgroup": "ffdhe2048" 00:13:33.692 } 00:13:33.692 } 00:13:33.692 ]' 00:13:33.692 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:33.692 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:33.692 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:33.952 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:33.952 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:33.952 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.952 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.952 
09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.211 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:13:34.211 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:13:34.779 09:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.779 09:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:13:34.779 09:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.779 09:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.779 09:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.779 09:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:34.779 09:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:34.779 09:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:35.347 09:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:13:35.347 09:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:35.347 09:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:35.347 09:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:35.347 09:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:35.347 09:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.347 09:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.347 09:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.347 09:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.347 09:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:13:35.347 09:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.347 09:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.347 09:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.347 00:13:35.606 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.606 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:35.606 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.866 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.866 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.866 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.866 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.866 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.866 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.866 { 00:13:35.866 "cntlid": 13, 00:13:35.866 "qid": 0, 00:13:35.866 "state": "enabled", 00:13:35.866 "thread": "nvmf_tgt_poll_group_000", 00:13:35.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:13:35.866 "listen_address": { 00:13:35.866 "trtype": "TCP", 00:13:35.866 "adrfam": "IPv4", 00:13:35.866 "traddr": "10.0.0.3", 00:13:35.866 "trsvcid": "4420" 00:13:35.866 }, 00:13:35.866 "peer_address": { 00:13:35.866 "trtype": "TCP", 00:13:35.866 "adrfam": "IPv4", 00:13:35.866 "traddr": "10.0.0.1", 00:13:35.866 "trsvcid": "49690" 00:13:35.866 }, 00:13:35.866 "auth": { 00:13:35.866 "state": "completed", 00:13:35.866 "digest": "sha256", 00:13:35.866 "dhgroup": "ffdhe2048" 00:13:35.866 } 00:13:35.866 } 00:13:35.866 ]' 00:13:35.866 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.866 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:35.866 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:35.866 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:35.866 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:35.866 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.866 09:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.866 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.125 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:13:36.125 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:13:36.693 09:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.693 09:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:13:36.693 09:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.693 09:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.693 09:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.694 09:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.694 09:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:36.694 09:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:37.262 09:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:13:37.262 09:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:37.262 09:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:37.262 09:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:37.262 09:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:37.262 09:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.262 09:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key3 00:13:37.262 09:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.262 09:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
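Each connect_authenticate round in this trace follows the same shape; the records around this point belong to the sha256/ffdhe2048 rounds. A condensed sketch of one round is given below, assembled only from the commands and arguments visible in this trace (subsystem NQN, host NQN/UUID, address and port are copied from this run; the DHHC-1 secrets are elided; rpc_cmd is the harness helper that talks to the target's RPC socket).

    # One authentication round as exercised above (digest sha256; dhgroup and key index vary per round).
    HOSTRPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a

    # Host side: restrict the initiator to the digest/dhgroup under test.
    $HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # Target side: allow the host with this round's key (and controller key, when one exists).
    rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3

    # Host side: attach over TCP and authenticate, check the qpair's auth block on the target, detach.
    $HOSTRPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key3
    rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # expect "completed"
    $HOSTRPC bdev_nvme_detach_controller nvme0

    # Repeat the handshake with the kernel initiator, then clean up the host entry on the target.
    nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret "DHHC-1:03:..."   # secret elided
    nvme disconnect -n "$SUBNQN"
    rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"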
00:13:37.262 09:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.262 09:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:37.262 09:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:37.262 09:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:37.521 00:13:37.521 09:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:37.521 09:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:37.521 09:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.780 09:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.780 09:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.780 09:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.780 09:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.780 09:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.780 09:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:37.780 { 00:13:37.780 "cntlid": 15, 00:13:37.780 "qid": 0, 00:13:37.780 "state": "enabled", 00:13:37.780 "thread": "nvmf_tgt_poll_group_000", 00:13:37.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:13:37.780 "listen_address": { 00:13:37.780 "trtype": "TCP", 00:13:37.780 "adrfam": "IPv4", 00:13:37.780 "traddr": "10.0.0.3", 00:13:37.780 "trsvcid": "4420" 00:13:37.780 }, 00:13:37.780 "peer_address": { 00:13:37.780 "trtype": "TCP", 00:13:37.780 "adrfam": "IPv4", 00:13:37.780 "traddr": "10.0.0.1", 00:13:37.780 "trsvcid": "49722" 00:13:37.780 }, 00:13:37.780 "auth": { 00:13:37.780 "state": "completed", 00:13:37.780 "digest": "sha256", 00:13:37.780 "dhgroup": "ffdhe2048" 00:13:37.780 } 00:13:37.780 } 00:13:37.780 ]' 00:13:37.780 09:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:37.780 09:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:37.780 09:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:37.780 09:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:37.780 09:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:38.039 09:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.039 
09:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.039 09:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.298 09:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:13:38.298 09:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:13:38.866 09:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.866 09:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:13:38.866 09:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.866 09:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.866 09:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.866 09:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:38.866 09:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:38.866 09:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:38.867 09:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:39.126 09:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:13:39.126 09:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:39.126 09:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:39.126 09:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:39.126 09:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:39.126 09:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.126 09:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.126 09:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.126 09:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:39.126 09:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.126 09:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.126 09:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.126 09:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.386 00:13:39.386 09:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.386 09:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.386 09:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.645 09:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.645 09:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.645 09:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.645 09:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.645 09:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.645 09:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.645 { 00:13:39.645 "cntlid": 17, 00:13:39.645 "qid": 0, 00:13:39.645 "state": "enabled", 00:13:39.645 "thread": "nvmf_tgt_poll_group_000", 00:13:39.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:13:39.645 "listen_address": { 00:13:39.645 "trtype": "TCP", 00:13:39.645 "adrfam": "IPv4", 00:13:39.645 "traddr": "10.0.0.3", 00:13:39.645 "trsvcid": "4420" 00:13:39.645 }, 00:13:39.645 "peer_address": { 00:13:39.645 "trtype": "TCP", 00:13:39.645 "adrfam": "IPv4", 00:13:39.645 "traddr": "10.0.0.1", 00:13:39.645 "trsvcid": "49748" 00:13:39.645 }, 00:13:39.645 "auth": { 00:13:39.645 "state": "completed", 00:13:39.645 "digest": "sha256", 00:13:39.645 "dhgroup": "ffdhe3072" 00:13:39.645 } 00:13:39.645 } 00:13:39.645 ]' 00:13:39.645 09:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.904 09:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:39.904 09:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.904 09:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:39.904 09:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:39.904 09:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.904 09:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.904 09:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.164 09:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:13:40.164 09:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:13:41.102 09:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:41.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:41.102 09:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:13:41.102 09:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.102 09:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.102 09:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.102 09:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:41.102 09:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:41.102 09:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:41.102 09:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:13:41.102 09:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:41.102 09:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:41.102 09:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:41.102 09:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:41.102 09:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.102 09:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:13:41.102 09:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.102 09:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.102 09:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.102 09:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.102 09:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.102 09:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.708 00:13:41.708 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:41.708 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:41.708 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.967 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.967 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.967 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.967 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.967 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.967 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:41.967 { 00:13:41.967 "cntlid": 19, 00:13:41.967 "qid": 0, 00:13:41.967 "state": "enabled", 00:13:41.967 "thread": "nvmf_tgt_poll_group_000", 00:13:41.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:13:41.967 "listen_address": { 00:13:41.967 "trtype": "TCP", 00:13:41.967 "adrfam": "IPv4", 00:13:41.967 "traddr": "10.0.0.3", 00:13:41.967 "trsvcid": "4420" 00:13:41.967 }, 00:13:41.967 "peer_address": { 00:13:41.967 "trtype": "TCP", 00:13:41.967 "adrfam": "IPv4", 00:13:41.967 "traddr": "10.0.0.1", 00:13:41.967 "trsvcid": "44570" 00:13:41.967 }, 00:13:41.967 "auth": { 00:13:41.967 "state": "completed", 00:13:41.967 "digest": "sha256", 00:13:41.967 "dhgroup": "ffdhe3072" 00:13:41.967 } 00:13:41.967 } 00:13:41.967 ]' 00:13:41.967 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:41.967 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:41.967 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:41.967 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:41.967 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:41.967 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.967 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.967 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.535 09:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:13:42.535 09:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:13:43.103 09:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.103 09:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:13:43.103 09:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.103 09:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.103 09:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.103 09:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:43.103 09:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:43.103 09:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:43.363 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:13:43.363 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:43.363 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:43.363 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:43.363 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:43.363 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.363 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.363 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.363 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.363 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.363 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.363 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.363 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.931 00:13:43.931 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:43.931 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:43.931 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.931 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.931 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.931 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.931 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.931 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.931 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:43.931 { 00:13:43.931 "cntlid": 21, 00:13:43.931 "qid": 0, 00:13:43.931 "state": "enabled", 00:13:43.931 "thread": "nvmf_tgt_poll_group_000", 00:13:43.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:13:43.931 "listen_address": { 00:13:43.931 "trtype": "TCP", 00:13:43.931 "adrfam": "IPv4", 00:13:43.931 "traddr": "10.0.0.3", 00:13:43.931 "trsvcid": "4420" 00:13:43.931 }, 00:13:43.931 "peer_address": { 00:13:43.931 "trtype": "TCP", 00:13:43.931 "adrfam": "IPv4", 00:13:43.931 "traddr": "10.0.0.1", 00:13:43.931 "trsvcid": "44590" 00:13:43.931 }, 00:13:43.931 "auth": { 00:13:43.931 "state": "completed", 00:13:43.931 "digest": "sha256", 00:13:43.931 "dhgroup": "ffdhe3072" 00:13:43.931 } 00:13:43.931 } 00:13:43.931 ]' 00:13:43.931 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:44.190 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:44.190 09:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:44.190 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:44.190 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:44.190 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.190 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.190 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.449 09:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:13:44.449 09:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:13:45.017 09:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.275 09:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:13:45.275 09:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.275 09:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.275 09:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.275 09:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:45.275 09:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:45.275 09:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:45.534 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:13:45.534 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:45.534 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:45.534 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:45.534 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:45.534 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.534 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key3 00:13:45.534 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.534 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.534 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.534 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:45.535 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:45.535 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:45.793 00:13:45.793 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:45.793 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:45.793 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.053 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.053 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.053 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.053 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.053 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.053 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:46.053 { 00:13:46.053 "cntlid": 23, 00:13:46.053 "qid": 0, 00:13:46.053 "state": "enabled", 00:13:46.053 "thread": "nvmf_tgt_poll_group_000", 00:13:46.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:13:46.053 "listen_address": { 00:13:46.053 "trtype": "TCP", 00:13:46.053 "adrfam": "IPv4", 00:13:46.053 "traddr": "10.0.0.3", 00:13:46.053 "trsvcid": "4420" 00:13:46.053 }, 00:13:46.053 "peer_address": { 00:13:46.053 "trtype": "TCP", 00:13:46.053 "adrfam": "IPv4", 00:13:46.053 "traddr": "10.0.0.1", 00:13:46.053 "trsvcid": "44610" 00:13:46.053 }, 00:13:46.053 "auth": { 00:13:46.053 "state": "completed", 00:13:46.053 "digest": "sha256", 00:13:46.053 "dhgroup": "ffdhe3072" 00:13:46.053 } 00:13:46.053 } 00:13:46.053 ]' 00:13:46.053 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:46.053 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:13:46.053 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:46.312 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:46.312 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:46.312 09:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.312 09:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.312 09:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.570 09:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:13:46.570 09:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:13:47.138 09:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.138 09:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:13:47.138 09:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.138 09:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.138 09:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.138 09:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:47.138 09:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:47.138 09:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:47.138 09:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:47.397 09:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:13:47.397 09:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:47.397 09:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:47.397 09:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:47.397 09:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:47.397 09:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.397 09:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.397 09:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.397 09:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.655 09:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.655 09:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.655 09:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.655 09:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.913 00:13:47.913 09:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:47.913 09:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.913 09:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:48.172 09:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.172 09:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.172 09:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.172 09:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.172 09:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.172 09:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:48.172 { 00:13:48.172 "cntlid": 25, 00:13:48.172 "qid": 0, 00:13:48.172 "state": "enabled", 00:13:48.172 "thread": "nvmf_tgt_poll_group_000", 00:13:48.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:13:48.172 "listen_address": { 00:13:48.172 "trtype": "TCP", 00:13:48.172 "adrfam": "IPv4", 00:13:48.172 "traddr": "10.0.0.3", 00:13:48.172 "trsvcid": "4420" 00:13:48.172 }, 00:13:48.172 "peer_address": { 00:13:48.172 "trtype": "TCP", 00:13:48.172 "adrfam": "IPv4", 00:13:48.172 "traddr": "10.0.0.1", 00:13:48.172 "trsvcid": "44624" 00:13:48.172 }, 00:13:48.172 "auth": { 00:13:48.172 "state": "completed", 00:13:48.172 "digest": "sha256", 00:13:48.172 "dhgroup": "ffdhe4096" 00:13:48.172 } 00:13:48.172 } 00:13:48.172 ]' 00:13:48.172 09:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:13:48.172 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:48.172 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:48.432 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:48.432 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:48.432 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.432 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.432 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.691 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:13:48.691 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:13:49.258 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.258 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:13:49.258 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.258 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.258 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.258 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:49.258 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:49.258 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:49.518 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:13:49.518 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:49.518 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:49.518 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:49.518 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:49.518 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.518 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.518 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.518 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.518 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.518 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.518 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.518 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.085 00:13:50.085 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:50.085 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:50.085 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.345 09:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.345 09:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.345 09:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.345 09:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.345 09:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.345 09:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:50.345 { 00:13:50.345 "cntlid": 27, 00:13:50.345 "qid": 0, 00:13:50.345 "state": "enabled", 00:13:50.345 "thread": "nvmf_tgt_poll_group_000", 00:13:50.345 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:13:50.345 "listen_address": { 00:13:50.345 "trtype": "TCP", 00:13:50.345 "adrfam": "IPv4", 00:13:50.345 "traddr": "10.0.0.3", 00:13:50.345 "trsvcid": "4420" 00:13:50.345 }, 00:13:50.345 "peer_address": { 00:13:50.345 "trtype": "TCP", 00:13:50.345 "adrfam": "IPv4", 00:13:50.345 "traddr": "10.0.0.1", 00:13:50.345 "trsvcid": "39436" 00:13:50.345 }, 00:13:50.345 "auth": { 00:13:50.345 "state": "completed", 
00:13:50.345 "digest": "sha256", 00:13:50.345 "dhgroup": "ffdhe4096" 00:13:50.345 } 00:13:50.345 } 00:13:50.345 ]' 00:13:50.345 09:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:50.345 09:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:50.345 09:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:50.345 09:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:50.345 09:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:50.345 09:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.345 09:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.345 09:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.913 09:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:13:50.913 09:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:13:51.482 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.482 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:13:51.482 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.482 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.482 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.482 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:51.482 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:51.482 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:51.783 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:13:51.783 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:51.783 09:15:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:51.783 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:51.783 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:51.783 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.783 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.783 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.783 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.783 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.783 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.783 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.783 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:52.042 00:13:52.042 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:52.042 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.042 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:52.301 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.301 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.301 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.301 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.301 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.301 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:52.301 { 00:13:52.301 "cntlid": 29, 00:13:52.301 "qid": 0, 00:13:52.301 "state": "enabled", 00:13:52.301 "thread": "nvmf_tgt_poll_group_000", 00:13:52.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:13:52.301 "listen_address": { 00:13:52.301 "trtype": "TCP", 00:13:52.301 "adrfam": "IPv4", 00:13:52.301 "traddr": "10.0.0.3", 00:13:52.301 "trsvcid": "4420" 00:13:52.301 }, 00:13:52.301 "peer_address": { 00:13:52.301 "trtype": "TCP", 00:13:52.301 "adrfam": 
"IPv4", 00:13:52.301 "traddr": "10.0.0.1", 00:13:52.301 "trsvcid": "39458" 00:13:52.301 }, 00:13:52.301 "auth": { 00:13:52.301 "state": "completed", 00:13:52.301 "digest": "sha256", 00:13:52.301 "dhgroup": "ffdhe4096" 00:13:52.301 } 00:13:52.301 } 00:13:52.301 ]' 00:13:52.301 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:52.559 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:52.559 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:52.559 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:52.559 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:52.559 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.559 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.559 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.818 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:13:52.818 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:13:53.387 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:53.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:53.387 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:13:53.387 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.387 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.387 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.387 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:53.387 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:53.387 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:53.648 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:13:53.648 09:15:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:53.648 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:53.648 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:53.648 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:53.648 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:53.648 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key3 00:13:53.648 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.648 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.907 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.907 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:53.907 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:53.907 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:54.166 00:13:54.166 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:54.166 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.166 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:54.425 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.425 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.425 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.425 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.425 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.425 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:54.425 { 00:13:54.425 "cntlid": 31, 00:13:54.425 "qid": 0, 00:13:54.425 "state": "enabled", 00:13:54.425 "thread": "nvmf_tgt_poll_group_000", 00:13:54.425 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:13:54.425 "listen_address": { 00:13:54.425 "trtype": "TCP", 00:13:54.425 "adrfam": "IPv4", 00:13:54.425 "traddr": "10.0.0.3", 00:13:54.425 "trsvcid": "4420" 00:13:54.425 }, 00:13:54.425 "peer_address": { 00:13:54.425 "trtype": "TCP", 
00:13:54.425 "adrfam": "IPv4", 00:13:54.425 "traddr": "10.0.0.1", 00:13:54.425 "trsvcid": "39490" 00:13:54.425 }, 00:13:54.425 "auth": { 00:13:54.425 "state": "completed", 00:13:54.425 "digest": "sha256", 00:13:54.425 "dhgroup": "ffdhe4096" 00:13:54.425 } 00:13:54.425 } 00:13:54.425 ]' 00:13:54.425 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:54.425 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:54.425 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:54.684 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:54.684 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:54.684 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:54.684 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.684 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.943 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:13:54.943 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:13:55.512 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.512 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:13:55.512 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.512 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.512 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.512 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:55.512 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:55.512 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:55.512 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:56.080 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:13:56.080 
09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:56.080 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:56.080 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:56.080 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:56.080 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.080 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.080 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.080 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.080 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.080 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.080 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.080 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.339 00:13:56.339 09:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:56.339 09:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.339 09:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:56.598 09:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.598 09:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.598 09:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.598 09:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.598 09:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.598 09:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:56.598 { 00:13:56.598 "cntlid": 33, 00:13:56.598 "qid": 0, 00:13:56.598 "state": "enabled", 00:13:56.598 "thread": "nvmf_tgt_poll_group_000", 00:13:56.598 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:13:56.598 "listen_address": { 00:13:56.598 "trtype": "TCP", 00:13:56.598 "adrfam": "IPv4", 00:13:56.598 "traddr": 
"10.0.0.3", 00:13:56.598 "trsvcid": "4420" 00:13:56.598 }, 00:13:56.598 "peer_address": { 00:13:56.598 "trtype": "TCP", 00:13:56.598 "adrfam": "IPv4", 00:13:56.598 "traddr": "10.0.0.1", 00:13:56.598 "trsvcid": "39514" 00:13:56.598 }, 00:13:56.598 "auth": { 00:13:56.598 "state": "completed", 00:13:56.598 "digest": "sha256", 00:13:56.598 "dhgroup": "ffdhe6144" 00:13:56.598 } 00:13:56.598 } 00:13:56.598 ]' 00:13:56.598 09:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:56.857 09:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:56.857 09:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:56.857 09:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:56.857 09:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:56.857 09:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.857 09:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.857 09:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.116 09:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:13:57.116 09:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:13:57.684 09:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.684 09:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:13:57.684 09:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.684 09:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.684 09:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.684 09:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:57.684 09:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:57.684 09:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:57.943 09:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:13:57.943 09:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:57.943 09:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:57.943 09:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:57.943 09:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:57.943 09:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.943 09:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.943 09:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.943 09:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.943 09:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.943 09:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.943 09:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.943 09:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.511 00:13:58.511 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:58.511 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.511 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:58.770 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.770 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.770 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.770 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.770 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.770 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:58.770 { 00:13:58.770 "cntlid": 35, 00:13:58.770 "qid": 0, 00:13:58.770 "state": "enabled", 00:13:58.770 "thread": "nvmf_tgt_poll_group_000", 
00:13:58.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:13:58.770 "listen_address": { 00:13:58.770 "trtype": "TCP", 00:13:58.770 "adrfam": "IPv4", 00:13:58.770 "traddr": "10.0.0.3", 00:13:58.770 "trsvcid": "4420" 00:13:58.770 }, 00:13:58.770 "peer_address": { 00:13:58.770 "trtype": "TCP", 00:13:58.770 "adrfam": "IPv4", 00:13:58.770 "traddr": "10.0.0.1", 00:13:58.770 "trsvcid": "39540" 00:13:58.770 }, 00:13:58.770 "auth": { 00:13:58.770 "state": "completed", 00:13:58.770 "digest": "sha256", 00:13:58.770 "dhgroup": "ffdhe6144" 00:13:58.770 } 00:13:58.770 } 00:13:58.770 ]' 00:13:58.770 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:58.770 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:58.770 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:59.028 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:59.028 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:59.029 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.029 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.029 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.287 09:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:13:59.287 09:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:13:59.855 09:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.855 09:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:13:59.855 09:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.855 09:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.114 09:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.114 09:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:00.114 09:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:00.114 09:15:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:00.373 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:14:00.373 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:00.373 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:00.373 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:00.373 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:00.373 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.373 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:00.373 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.373 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.373 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.373 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:00.373 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:00.373 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:00.631 00:14:00.890 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:00.890 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:00.890 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.178 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.178 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.178 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.178 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.178 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.178 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:01.178 { 
00:14:01.178 "cntlid": 37, 00:14:01.178 "qid": 0, 00:14:01.178 "state": "enabled", 00:14:01.178 "thread": "nvmf_tgt_poll_group_000", 00:14:01.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:01.178 "listen_address": { 00:14:01.178 "trtype": "TCP", 00:14:01.178 "adrfam": "IPv4", 00:14:01.178 "traddr": "10.0.0.3", 00:14:01.178 "trsvcid": "4420" 00:14:01.178 }, 00:14:01.178 "peer_address": { 00:14:01.178 "trtype": "TCP", 00:14:01.178 "adrfam": "IPv4", 00:14:01.178 "traddr": "10.0.0.1", 00:14:01.178 "trsvcid": "33432" 00:14:01.178 }, 00:14:01.178 "auth": { 00:14:01.178 "state": "completed", 00:14:01.178 "digest": "sha256", 00:14:01.178 "dhgroup": "ffdhe6144" 00:14:01.178 } 00:14:01.178 } 00:14:01.178 ]' 00:14:01.178 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:01.178 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:01.178 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:01.178 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:01.178 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:01.178 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.178 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.178 09:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.445 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:14:01.445 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:14:02.011 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.011 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:02.011 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.011 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.011 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.011 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:02.011 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:02.011 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:02.270 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:14:02.270 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:02.270 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:02.271 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:02.271 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:02.271 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.271 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key3 00:14:02.271 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.271 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.271 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.271 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:02.271 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:02.271 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:02.838 00:14:02.838 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:02.838 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:02.838 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.097 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.097 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.097 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.097 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.097 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.097 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:14:03.097 { 00:14:03.097 "cntlid": 39, 00:14:03.097 "qid": 0, 00:14:03.097 "state": "enabled", 00:14:03.097 "thread": "nvmf_tgt_poll_group_000", 00:14:03.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:03.097 "listen_address": { 00:14:03.097 "trtype": "TCP", 00:14:03.097 "adrfam": "IPv4", 00:14:03.097 "traddr": "10.0.0.3", 00:14:03.097 "trsvcid": "4420" 00:14:03.097 }, 00:14:03.097 "peer_address": { 00:14:03.097 "trtype": "TCP", 00:14:03.097 "adrfam": "IPv4", 00:14:03.097 "traddr": "10.0.0.1", 00:14:03.097 "trsvcid": "33464" 00:14:03.097 }, 00:14:03.097 "auth": { 00:14:03.097 "state": "completed", 00:14:03.097 "digest": "sha256", 00:14:03.097 "dhgroup": "ffdhe6144" 00:14:03.097 } 00:14:03.097 } 00:14:03.097 ]' 00:14:03.097 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:03.097 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:03.097 09:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:03.356 09:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:03.356 09:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:03.356 09:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.356 09:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.356 09:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.615 09:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:14:03.615 09:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:14:04.183 09:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.183 09:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:04.183 09:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.183 09:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.183 09:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.183 09:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:04.183 09:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:04.183 09:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:04.183 09:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:04.442 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:14:04.442 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:04.442 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:04.442 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:04.442 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:04.442 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.442 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.442 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.442 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.442 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.442 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.442 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.442 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:05.010 00:14:05.010 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:05.010 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:05.010 09:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.269 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.269 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.269 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.269 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.269 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:14:05.269 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:05.269 { 00:14:05.269 "cntlid": 41, 00:14:05.269 "qid": 0, 00:14:05.269 "state": "enabled", 00:14:05.269 "thread": "nvmf_tgt_poll_group_000", 00:14:05.269 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:05.269 "listen_address": { 00:14:05.269 "trtype": "TCP", 00:14:05.269 "adrfam": "IPv4", 00:14:05.269 "traddr": "10.0.0.3", 00:14:05.269 "trsvcid": "4420" 00:14:05.269 }, 00:14:05.269 "peer_address": { 00:14:05.269 "trtype": "TCP", 00:14:05.269 "adrfam": "IPv4", 00:14:05.270 "traddr": "10.0.0.1", 00:14:05.270 "trsvcid": "33496" 00:14:05.270 }, 00:14:05.270 "auth": { 00:14:05.270 "state": "completed", 00:14:05.270 "digest": "sha256", 00:14:05.270 "dhgroup": "ffdhe8192" 00:14:05.270 } 00:14:05.270 } 00:14:05.270 ]' 00:14:05.270 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:05.270 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:05.270 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:05.529 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:05.529 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:05.529 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.529 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.529 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.788 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:14:05.788 09:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:14:06.356 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.356 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:06.356 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.356 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.356 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
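For reference, the cycle that keeps repeating above is the connect_authenticate helper from target/auth.sh. Below is a minimal sketch of the RPC side of one iteration (sha256 / ffdhe8192 / key0), reconstructed only from the commands echoed in this log; the socket path, NQNs, target address, and key names are the values used by this particular run and are illustrative only.

    #!/usr/bin/env bash
    # one connect_authenticate iteration, RPC side (sketch based on the log above)
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a

    # pin the host-side initiator to the digest/dhgroup under test
    rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # register the host on the subsystem with this iteration's key pair
    rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # attaching a controller forces the DH-HMAC-CHAP handshake to run
    rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # verify the qpair authenticated, then tear everything down again
    rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expected: completed
    rpc bdev_nvme_detach_controller nvme0
    rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"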
00:14:06.356 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:06.356 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:06.356 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:06.614 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:14:06.615 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:06.615 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:06.615 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:06.615 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:06.615 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.615 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.615 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.615 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.615 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.615 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.615 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.615 09:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.183 00:14:07.183 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:07.183 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:07.183 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.443 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.443 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.443 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.443 09:16:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.702 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.702 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:07.702 { 00:14:07.702 "cntlid": 43, 00:14:07.702 "qid": 0, 00:14:07.702 "state": "enabled", 00:14:07.702 "thread": "nvmf_tgt_poll_group_000", 00:14:07.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:07.702 "listen_address": { 00:14:07.702 "trtype": "TCP", 00:14:07.702 "adrfam": "IPv4", 00:14:07.702 "traddr": "10.0.0.3", 00:14:07.702 "trsvcid": "4420" 00:14:07.702 }, 00:14:07.702 "peer_address": { 00:14:07.702 "trtype": "TCP", 00:14:07.702 "adrfam": "IPv4", 00:14:07.702 "traddr": "10.0.0.1", 00:14:07.702 "trsvcid": "33530" 00:14:07.702 }, 00:14:07.702 "auth": { 00:14:07.702 "state": "completed", 00:14:07.702 "digest": "sha256", 00:14:07.702 "dhgroup": "ffdhe8192" 00:14:07.702 } 00:14:07.702 } 00:14:07.702 ]' 00:14:07.702 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:07.702 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:07.702 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:07.702 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:07.702 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:07.702 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.702 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.702 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.961 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:14:07.961 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:14:08.528 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.528 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:08.528 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.528 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
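The jq probes that follow each attach are how the test asserts that the handshake really used the parameters under test. A sketch of that check, assuming the same subsystem NQN and RPC socket as in this run; the expected values shown are the ones for the sha256 / ffdhe8192 pass visible here.

    #!/usr/bin/env bash
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"

    # dump the subsystem's qpairs and inspect what DH-HMAC-CHAP actually negotiated
    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]   # digest under test
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]   # dhgroup under test
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # handshake finished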
00:14:08.528 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.528 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:08.528 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:08.528 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:09.095 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:14:09.095 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:09.095 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:09.095 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:09.095 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:09.095 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.096 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.096 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.096 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.096 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.096 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.096 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.096 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.664 00:14:09.664 09:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:09.664 09:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:09.664 09:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.923 09:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.923 09:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.923 09:16:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.923 09:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.923 09:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.923 09:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:09.923 { 00:14:09.923 "cntlid": 45, 00:14:09.923 "qid": 0, 00:14:09.923 "state": "enabled", 00:14:09.923 "thread": "nvmf_tgt_poll_group_000", 00:14:09.923 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:09.923 "listen_address": { 00:14:09.923 "trtype": "TCP", 00:14:09.923 "adrfam": "IPv4", 00:14:09.923 "traddr": "10.0.0.3", 00:14:09.923 "trsvcid": "4420" 00:14:09.923 }, 00:14:09.923 "peer_address": { 00:14:09.923 "trtype": "TCP", 00:14:09.923 "adrfam": "IPv4", 00:14:09.923 "traddr": "10.0.0.1", 00:14:09.923 "trsvcid": "33558" 00:14:09.923 }, 00:14:09.923 "auth": { 00:14:09.923 "state": "completed", 00:14:09.923 "digest": "sha256", 00:14:09.923 "dhgroup": "ffdhe8192" 00:14:09.923 } 00:14:09.923 } 00:14:09.923 ]' 00:14:09.923 09:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:09.923 09:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:09.923 09:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:09.923 09:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:09.923 09:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:09.923 09:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.923 09:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.923 09:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.182 09:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:14:10.182 09:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:14:11.119 09:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.119 09:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:11.119 09:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
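Besides the bdev_nvme path, each iteration also proves the keys against the kernel initiator with nvme-cli, passing the DHHC-1 secrets on the command line. A sketch of that step follows; the secret strings are the throwaway keys generated for this run (abbreviated here with "..."), and the address and NQNs are the same as above.

    # connect through the kernel host stack using the plain-text DHHC-1 secrets
    nvme connect -t tcp -a 10.0.0.3 -l 0 -i 1 \
        -n nqn.2024-03.io.spdk:cnode0 \
        -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a \
        --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a \
        --dhchap-secret 'DHHC-1:02:...' \
        --dhchap-ctrl-secret 'DHHC-1:01:...'

    # once the connect succeeds the keys are proven; drop the connection again
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0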
00:14:11.119 09:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.119 09:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.119 09:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:11.119 09:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:11.119 09:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:11.119 09:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:14:11.119 09:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:11.119 09:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:11.119 09:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:11.119 09:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:11.119 09:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.119 09:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key3 00:14:11.119 09:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.119 09:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.119 09:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.119 09:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:11.119 09:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:11.119 09:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:11.746 00:14:11.746 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:11.746 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.746 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:12.005 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.263 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.263 
09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.263 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.263 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.263 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:12.263 { 00:14:12.263 "cntlid": 47, 00:14:12.263 "qid": 0, 00:14:12.263 "state": "enabled", 00:14:12.263 "thread": "nvmf_tgt_poll_group_000", 00:14:12.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:12.263 "listen_address": { 00:14:12.263 "trtype": "TCP", 00:14:12.263 "adrfam": "IPv4", 00:14:12.263 "traddr": "10.0.0.3", 00:14:12.263 "trsvcid": "4420" 00:14:12.263 }, 00:14:12.263 "peer_address": { 00:14:12.263 "trtype": "TCP", 00:14:12.263 "adrfam": "IPv4", 00:14:12.263 "traddr": "10.0.0.1", 00:14:12.263 "trsvcid": "44066" 00:14:12.263 }, 00:14:12.263 "auth": { 00:14:12.263 "state": "completed", 00:14:12.263 "digest": "sha256", 00:14:12.263 "dhgroup": "ffdhe8192" 00:14:12.263 } 00:14:12.263 } 00:14:12.263 ]' 00:14:12.263 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:12.263 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:12.263 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:12.263 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:12.263 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:12.263 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.263 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.263 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.522 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:14:12.522 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:14:13.090 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.349 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:13.349 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.349 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
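At this point the sha256 / ffdhe8192 pass finishes and the log moves on to sha384 with the null dhgroup. The outer structure implied by the for-loops echoed above is roughly the following; it is only a sketch, the array contents listed are just the combinations visible in this excerpt (the full target/auth.sh may cover more), and connect_authenticate stands for the helper whose body is sketched earlier.

    #!/usr/bin/env bash
    # sweep every digest/dhgroup/key combination, one connect_authenticate cycle each
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
    digests=(sha256 sha384)                        # sha384 pass starts just below
    dhgroups=(null ffdhe4096 ffdhe6144 ffdhe8192)  # groups seen in this excerpt

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in 0 1 2 3; do               # key0..key3 in this run
                # restrict the host to one digest/dhgroup, then run one cycle with key$keyid
                $rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done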
00:14:13.349 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.349 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:13.349 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:13.349 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:13.349 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:13.349 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:13.608 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:14:13.608 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:13.608 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:13.608 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:13.608 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:13.608 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.608 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.608 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.609 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.609 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.609 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.609 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.609 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.867 00:14:13.867 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:13.867 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:13.867 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.126 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.126 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.126 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.126 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.126 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.126 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:14.126 { 00:14:14.126 "cntlid": 49, 00:14:14.126 "qid": 0, 00:14:14.126 "state": "enabled", 00:14:14.126 "thread": "nvmf_tgt_poll_group_000", 00:14:14.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:14.126 "listen_address": { 00:14:14.126 "trtype": "TCP", 00:14:14.126 "adrfam": "IPv4", 00:14:14.126 "traddr": "10.0.0.3", 00:14:14.126 "trsvcid": "4420" 00:14:14.126 }, 00:14:14.126 "peer_address": { 00:14:14.126 "trtype": "TCP", 00:14:14.126 "adrfam": "IPv4", 00:14:14.126 "traddr": "10.0.0.1", 00:14:14.126 "trsvcid": "44094" 00:14:14.126 }, 00:14:14.126 "auth": { 00:14:14.126 "state": "completed", 00:14:14.126 "digest": "sha384", 00:14:14.126 "dhgroup": "null" 00:14:14.126 } 00:14:14.126 } 00:14:14.126 ]' 00:14:14.126 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:14.127 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:14.127 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:14.127 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:14.127 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:14.127 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.127 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.127 09:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.694 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:14:14.694 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:14:15.262 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.262 09:16:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:15.262 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.262 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.263 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.263 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:15.263 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:15.263 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:15.521 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:14:15.521 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:15.521 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:15.521 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:15.521 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:15.521 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.521 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.521 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.521 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.521 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.521 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.521 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.521 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.781 00:14:15.781 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:15.781 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:15.781 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.040 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.040 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.040 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.040 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.040 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.040 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:16.040 { 00:14:16.040 "cntlid": 51, 00:14:16.040 "qid": 0, 00:14:16.040 "state": "enabled", 00:14:16.040 "thread": "nvmf_tgt_poll_group_000", 00:14:16.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:16.040 "listen_address": { 00:14:16.040 "trtype": "TCP", 00:14:16.040 "adrfam": "IPv4", 00:14:16.040 "traddr": "10.0.0.3", 00:14:16.040 "trsvcid": "4420" 00:14:16.040 }, 00:14:16.040 "peer_address": { 00:14:16.040 "trtype": "TCP", 00:14:16.040 "adrfam": "IPv4", 00:14:16.040 "traddr": "10.0.0.1", 00:14:16.040 "trsvcid": "44106" 00:14:16.040 }, 00:14:16.040 "auth": { 00:14:16.040 "state": "completed", 00:14:16.040 "digest": "sha384", 00:14:16.040 "dhgroup": "null" 00:14:16.040 } 00:14:16.040 } 00:14:16.040 ]' 00:14:16.040 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:16.040 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:16.040 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:16.040 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:16.040 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:16.299 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.299 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.299 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.557 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:14:16.557 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:14:17.125 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.125 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.125 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:17.125 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.125 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.125 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.125 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:17.125 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:17.126 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:17.385 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:14:17.385 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:17.385 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:17.385 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:17.385 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:17.385 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.385 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.385 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.385 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.385 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.385 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.385 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.385 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.644 00:14:17.644 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:17.644 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:14:17.644 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.903 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.903 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.903 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.903 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.903 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.903 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:17.903 { 00:14:17.903 "cntlid": 53, 00:14:17.903 "qid": 0, 00:14:17.903 "state": "enabled", 00:14:17.903 "thread": "nvmf_tgt_poll_group_000", 00:14:17.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:17.903 "listen_address": { 00:14:17.903 "trtype": "TCP", 00:14:17.903 "adrfam": "IPv4", 00:14:17.903 "traddr": "10.0.0.3", 00:14:17.903 "trsvcid": "4420" 00:14:17.903 }, 00:14:17.903 "peer_address": { 00:14:17.903 "trtype": "TCP", 00:14:17.903 "adrfam": "IPv4", 00:14:17.903 "traddr": "10.0.0.1", 00:14:17.903 "trsvcid": "44138" 00:14:17.903 }, 00:14:17.903 "auth": { 00:14:17.903 "state": "completed", 00:14:17.903 "digest": "sha384", 00:14:17.903 "dhgroup": "null" 00:14:17.903 } 00:14:17.903 } 00:14:17.903 ]' 00:14:17.903 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:18.162 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:18.162 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:18.162 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:18.162 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:18.162 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.162 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.162 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.421 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:14:18.421 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:14:18.989 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.989 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:18.989 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.989 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.989 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.989 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:18.989 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:18.989 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:19.248 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:14:19.249 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:19.249 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:19.249 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:19.249 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:19.249 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.249 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key3 00:14:19.249 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.249 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.249 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.249 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:19.249 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:19.249 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:19.507 00:14:19.508 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:19.508 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:14:19.508 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.767 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.767 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.767 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.767 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.767 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.767 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.767 { 00:14:19.767 "cntlid": 55, 00:14:19.767 "qid": 0, 00:14:19.767 "state": "enabled", 00:14:19.767 "thread": "nvmf_tgt_poll_group_000", 00:14:19.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:19.767 "listen_address": { 00:14:19.767 "trtype": "TCP", 00:14:19.767 "adrfam": "IPv4", 00:14:19.767 "traddr": "10.0.0.3", 00:14:19.767 "trsvcid": "4420" 00:14:19.767 }, 00:14:19.767 "peer_address": { 00:14:19.767 "trtype": "TCP", 00:14:19.767 "adrfam": "IPv4", 00:14:19.767 "traddr": "10.0.0.1", 00:14:19.767 "trsvcid": "44172" 00:14:19.767 }, 00:14:19.767 "auth": { 00:14:19.767 "state": "completed", 00:14:19.767 "digest": "sha384", 00:14:19.767 "dhgroup": "null" 00:14:19.767 } 00:14:19.767 } 00:14:19.767 ]' 00:14:19.767 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:19.767 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:19.767 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:20.026 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:20.026 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:20.026 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.026 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.026 09:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.285 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:14:20.285 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:14:21.221 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:14:21.221 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:21.221 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.221 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.222 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.222 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:21.222 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:21.222 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:21.222 09:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:21.222 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:14:21.222 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:21.222 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:21.222 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:21.222 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:21.222 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.222 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:21.222 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.222 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.222 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.222 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:21.222 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:21.222 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:21.836 00:14:21.836 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:21.836 
09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:21.836 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.095 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.095 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.095 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.095 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.095 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.095 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:22.095 { 00:14:22.095 "cntlid": 57, 00:14:22.095 "qid": 0, 00:14:22.095 "state": "enabled", 00:14:22.095 "thread": "nvmf_tgt_poll_group_000", 00:14:22.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:22.095 "listen_address": { 00:14:22.095 "trtype": "TCP", 00:14:22.095 "adrfam": "IPv4", 00:14:22.095 "traddr": "10.0.0.3", 00:14:22.095 "trsvcid": "4420" 00:14:22.095 }, 00:14:22.095 "peer_address": { 00:14:22.095 "trtype": "TCP", 00:14:22.095 "adrfam": "IPv4", 00:14:22.095 "traddr": "10.0.0.1", 00:14:22.095 "trsvcid": "41432" 00:14:22.095 }, 00:14:22.095 "auth": { 00:14:22.095 "state": "completed", 00:14:22.095 "digest": "sha384", 00:14:22.095 "dhgroup": "ffdhe2048" 00:14:22.095 } 00:14:22.095 } 00:14:22.095 ]' 00:14:22.095 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:22.095 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:22.095 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:22.095 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:22.095 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:22.095 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.095 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.095 09:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.354 09:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:14:22.354 09:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: 
--dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:14:22.921 09:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.921 09:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:22.921 09:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.921 09:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.921 09:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.921 09:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:22.922 09:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:22.922 09:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:23.180 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:14:23.180 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:23.180 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:23.180 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:23.180 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:23.180 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.181 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.181 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.181 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.439 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.439 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.439 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.439 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.698 00:14:23.698 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:23.698 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.698 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:23.957 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.957 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.957 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.957 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.957 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.957 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:23.957 { 00:14:23.957 "cntlid": 59, 00:14:23.957 "qid": 0, 00:14:23.957 "state": "enabled", 00:14:23.957 "thread": "nvmf_tgt_poll_group_000", 00:14:23.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:23.957 "listen_address": { 00:14:23.957 "trtype": "TCP", 00:14:23.957 "adrfam": "IPv4", 00:14:23.957 "traddr": "10.0.0.3", 00:14:23.957 "trsvcid": "4420" 00:14:23.957 }, 00:14:23.957 "peer_address": { 00:14:23.957 "trtype": "TCP", 00:14:23.957 "adrfam": "IPv4", 00:14:23.957 "traddr": "10.0.0.1", 00:14:23.957 "trsvcid": "41464" 00:14:23.957 }, 00:14:23.957 "auth": { 00:14:23.957 "state": "completed", 00:14:23.957 "digest": "sha384", 00:14:23.957 "dhgroup": "ffdhe2048" 00:14:23.957 } 00:14:23.957 } 00:14:23.957 ]' 00:14:23.957 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:23.957 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:23.957 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:24.215 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:24.215 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:24.215 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.215 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.215 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.474 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:14:24.474 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:14:25.042 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.042 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:25.042 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.042 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.042 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.042 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:25.042 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:25.042 09:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:25.301 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:14:25.301 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.301 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:25.301 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:25.301 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:25.301 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.301 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.301 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.301 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.301 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.301 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.301 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.301 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.868 00:14:25.869 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:25.869 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:25.869 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.128 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.128 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.128 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.128 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.128 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.128 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:26.128 { 00:14:26.128 "cntlid": 61, 00:14:26.128 "qid": 0, 00:14:26.128 "state": "enabled", 00:14:26.128 "thread": "nvmf_tgt_poll_group_000", 00:14:26.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:26.128 "listen_address": { 00:14:26.128 "trtype": "TCP", 00:14:26.128 "adrfam": "IPv4", 00:14:26.128 "traddr": "10.0.0.3", 00:14:26.128 "trsvcid": "4420" 00:14:26.128 }, 00:14:26.128 "peer_address": { 00:14:26.128 "trtype": "TCP", 00:14:26.128 "adrfam": "IPv4", 00:14:26.128 "traddr": "10.0.0.1", 00:14:26.128 "trsvcid": "41482" 00:14:26.128 }, 00:14:26.128 "auth": { 00:14:26.128 "state": "completed", 00:14:26.128 "digest": "sha384", 00:14:26.128 "dhgroup": "ffdhe2048" 00:14:26.128 } 00:14:26.128 } 00:14:26.128 ]' 00:14:26.128 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:26.128 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:26.128 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:26.128 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:26.128 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:26.128 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.128 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.128 09:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.697 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:14:26.697 09:16:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:14:27.265 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.265 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:27.265 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.265 09:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.265 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.265 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:27.265 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:27.265 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:27.524 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:14:27.524 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:27.524 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:27.524 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:27.524 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:27.524 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.524 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key3 00:14:27.524 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.524 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.524 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.524 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:27.524 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:27.524 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:27.784 00:14:28.043 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:28.043 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:28.043 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.302 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.302 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.302 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.302 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.302 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.302 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:28.302 { 00:14:28.302 "cntlid": 63, 00:14:28.302 "qid": 0, 00:14:28.302 "state": "enabled", 00:14:28.302 "thread": "nvmf_tgt_poll_group_000", 00:14:28.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:28.302 "listen_address": { 00:14:28.302 "trtype": "TCP", 00:14:28.302 "adrfam": "IPv4", 00:14:28.302 "traddr": "10.0.0.3", 00:14:28.302 "trsvcid": "4420" 00:14:28.302 }, 00:14:28.302 "peer_address": { 00:14:28.302 "trtype": "TCP", 00:14:28.302 "adrfam": "IPv4", 00:14:28.302 "traddr": "10.0.0.1", 00:14:28.302 "trsvcid": "41510" 00:14:28.302 }, 00:14:28.302 "auth": { 00:14:28.302 "state": "completed", 00:14:28.302 "digest": "sha384", 00:14:28.302 "dhgroup": "ffdhe2048" 00:14:28.302 } 00:14:28.302 } 00:14:28.302 ]' 00:14:28.302 09:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:28.302 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:28.302 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:28.302 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:28.302 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:28.302 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.302 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.302 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.561 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:14:28.561 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:14:29.130 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.130 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:29.130 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.130 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.130 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.130 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:29.130 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:29.130 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:29.130 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:29.389 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:14:29.389 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:29.389 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:29.389 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:29.389 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:29.389 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.389 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.389 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.389 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.389 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.389 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.389 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:14:29.389 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.957 00:14:29.957 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.957 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:29.957 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.957 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.957 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.957 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.957 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.957 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.957 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:29.957 { 00:14:29.957 "cntlid": 65, 00:14:29.957 "qid": 0, 00:14:29.957 "state": "enabled", 00:14:29.957 "thread": "nvmf_tgt_poll_group_000", 00:14:29.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:29.957 "listen_address": { 00:14:29.957 "trtype": "TCP", 00:14:29.957 "adrfam": "IPv4", 00:14:29.957 "traddr": "10.0.0.3", 00:14:29.957 "trsvcid": "4420" 00:14:29.957 }, 00:14:29.958 "peer_address": { 00:14:29.958 "trtype": "TCP", 00:14:29.958 "adrfam": "IPv4", 00:14:29.958 "traddr": "10.0.0.1", 00:14:29.958 "trsvcid": "48900" 00:14:29.958 }, 00:14:29.958 "auth": { 00:14:29.958 "state": "completed", 00:14:29.958 "digest": "sha384", 00:14:29.958 "dhgroup": "ffdhe3072" 00:14:29.958 } 00:14:29.958 } 00:14:29.958 ]' 00:14:29.958 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:30.217 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:30.217 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:30.217 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:30.217 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:30.217 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.217 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.217 09:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.476 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:14:30.476 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:14:31.412 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.412 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:31.412 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.412 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.412 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.412 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:31.412 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:31.412 09:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:31.412 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:14:31.412 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:31.412 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:31.412 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:31.412 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:31.412 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.412 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.412 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.412 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.412 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.412 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.412 09:16:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.412 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.979 00:14:31.979 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:31.979 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:31.979 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.238 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.238 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.238 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.238 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.238 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.238 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:32.238 { 00:14:32.238 "cntlid": 67, 00:14:32.238 "qid": 0, 00:14:32.238 "state": "enabled", 00:14:32.238 "thread": "nvmf_tgt_poll_group_000", 00:14:32.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:32.238 "listen_address": { 00:14:32.238 "trtype": "TCP", 00:14:32.238 "adrfam": "IPv4", 00:14:32.238 "traddr": "10.0.0.3", 00:14:32.238 "trsvcid": "4420" 00:14:32.238 }, 00:14:32.238 "peer_address": { 00:14:32.238 "trtype": "TCP", 00:14:32.238 "adrfam": "IPv4", 00:14:32.238 "traddr": "10.0.0.1", 00:14:32.238 "trsvcid": "48922" 00:14:32.238 }, 00:14:32.238 "auth": { 00:14:32.238 "state": "completed", 00:14:32.238 "digest": "sha384", 00:14:32.238 "dhgroup": "ffdhe3072" 00:14:32.238 } 00:14:32.238 } 00:14:32.238 ]' 00:14:32.238 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:32.238 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:32.238 09:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:32.238 09:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:32.238 09:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:32.238 09:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.238 09:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.238 09:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.497 09:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:14:32.497 09:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:14:33.448 09:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.448 09:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:33.448 09:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.448 09:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.448 09:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.449 09:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.449 09:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:33.449 09:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:33.449 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:14:33.449 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:33.449 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:33.449 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:33.449 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:33.449 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.449 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.449 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.449 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.449 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.449 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.449 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.449 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.025 00:14:34.025 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:34.025 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:34.025 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.025 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.025 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.025 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.025 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.284 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.284 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:34.284 { 00:14:34.284 "cntlid": 69, 00:14:34.284 "qid": 0, 00:14:34.284 "state": "enabled", 00:14:34.284 "thread": "nvmf_tgt_poll_group_000", 00:14:34.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:34.284 "listen_address": { 00:14:34.284 "trtype": "TCP", 00:14:34.284 "adrfam": "IPv4", 00:14:34.284 "traddr": "10.0.0.3", 00:14:34.284 "trsvcid": "4420" 00:14:34.284 }, 00:14:34.284 "peer_address": { 00:14:34.284 "trtype": "TCP", 00:14:34.284 "adrfam": "IPv4", 00:14:34.284 "traddr": "10.0.0.1", 00:14:34.284 "trsvcid": "48960" 00:14:34.284 }, 00:14:34.284 "auth": { 00:14:34.284 "state": "completed", 00:14:34.284 "digest": "sha384", 00:14:34.284 "dhgroup": "ffdhe3072" 00:14:34.284 } 00:14:34.284 } 00:14:34.284 ]' 00:14:34.284 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:34.284 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:34.284 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:34.284 09:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:34.284 09:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:34.284 09:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.284 09:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:34.284 09:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.543 09:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:14:34.543 09:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:14:35.480 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.480 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:35.480 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.480 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.480 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.480 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:35.480 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:35.480 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:35.480 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:14:35.480 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:35.480 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:35.480 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:35.480 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:35.480 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.480 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key3 00:14:35.480 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.480 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.480 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.480 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:35.480 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:35.480 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:36.049 00:14:36.049 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:36.049 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.049 09:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:36.307 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.307 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.307 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.308 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.308 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.308 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:36.308 { 00:14:36.308 "cntlid": 71, 00:14:36.308 "qid": 0, 00:14:36.308 "state": "enabled", 00:14:36.308 "thread": "nvmf_tgt_poll_group_000", 00:14:36.308 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:36.308 "listen_address": { 00:14:36.308 "trtype": "TCP", 00:14:36.308 "adrfam": "IPv4", 00:14:36.308 "traddr": "10.0.0.3", 00:14:36.308 "trsvcid": "4420" 00:14:36.308 }, 00:14:36.308 "peer_address": { 00:14:36.308 "trtype": "TCP", 00:14:36.308 "adrfam": "IPv4", 00:14:36.308 "traddr": "10.0.0.1", 00:14:36.308 "trsvcid": "48984" 00:14:36.308 }, 00:14:36.308 "auth": { 00:14:36.308 "state": "completed", 00:14:36.308 "digest": "sha384", 00:14:36.308 "dhgroup": "ffdhe3072" 00:14:36.308 } 00:14:36.308 } 00:14:36.308 ]' 00:14:36.308 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:36.308 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:36.308 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:36.308 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:36.308 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:36.308 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.308 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.308 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.875 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:14:36.875 09:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:14:37.443 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.443 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:37.443 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.443 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.443 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.443 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:37.443 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:37.443 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:37.443 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:38.010 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:14:38.010 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:38.010 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:38.010 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:38.010 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:38.010 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.010 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.010 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.010 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.010 09:16:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.010 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.010 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.010 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.268 00:14:38.268 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:38.268 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:38.268 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.527 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.527 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.527 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.527 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.527 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.527 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:38.527 { 00:14:38.527 "cntlid": 73, 00:14:38.527 "qid": 0, 00:14:38.527 "state": "enabled", 00:14:38.527 "thread": "nvmf_tgt_poll_group_000", 00:14:38.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:38.527 "listen_address": { 00:14:38.527 "trtype": "TCP", 00:14:38.527 "adrfam": "IPv4", 00:14:38.527 "traddr": "10.0.0.3", 00:14:38.527 "trsvcid": "4420" 00:14:38.527 }, 00:14:38.527 "peer_address": { 00:14:38.527 "trtype": "TCP", 00:14:38.527 "adrfam": "IPv4", 00:14:38.527 "traddr": "10.0.0.1", 00:14:38.527 "trsvcid": "49014" 00:14:38.527 }, 00:14:38.527 "auth": { 00:14:38.527 "state": "completed", 00:14:38.527 "digest": "sha384", 00:14:38.527 "dhgroup": "ffdhe4096" 00:14:38.527 } 00:14:38.527 } 00:14:38.527 ]' 00:14:38.527 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:38.527 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:38.527 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:38.786 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:38.786 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:38.786 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.786 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.786 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.045 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:14:39.045 09:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:14:39.611 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.611 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:39.611 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.611 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.611 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.611 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:39.611 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:39.611 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:39.870 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:14:39.870 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:39.870 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:39.870 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:39.870 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:39.870 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.870 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.870 09:16:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.870 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.870 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.870 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.870 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.870 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.129 00:14:40.129 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:40.129 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:40.129 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.697 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.697 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.697 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.697 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.697 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.697 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:40.697 { 00:14:40.697 "cntlid": 75, 00:14:40.697 "qid": 0, 00:14:40.697 "state": "enabled", 00:14:40.697 "thread": "nvmf_tgt_poll_group_000", 00:14:40.697 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:40.697 "listen_address": { 00:14:40.697 "trtype": "TCP", 00:14:40.697 "adrfam": "IPv4", 00:14:40.697 "traddr": "10.0.0.3", 00:14:40.697 "trsvcid": "4420" 00:14:40.697 }, 00:14:40.697 "peer_address": { 00:14:40.697 "trtype": "TCP", 00:14:40.697 "adrfam": "IPv4", 00:14:40.697 "traddr": "10.0.0.1", 00:14:40.697 "trsvcid": "39556" 00:14:40.697 }, 00:14:40.697 "auth": { 00:14:40.697 "state": "completed", 00:14:40.697 "digest": "sha384", 00:14:40.697 "dhgroup": "ffdhe4096" 00:14:40.697 } 00:14:40.697 } 00:14:40.697 ]' 00:14:40.697 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:40.697 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:40.697 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:40.697 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:14:40.697 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:40.697 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.697 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.697 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.955 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:14:40.955 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:14:41.890 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.890 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:41.890 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.890 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.890 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.890 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:41.890 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:41.890 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:41.890 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:14:41.890 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.890 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:41.890 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:41.890 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:41.890 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.890 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.890 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.890 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.890 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.890 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.890 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.890 09:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.458 00:14:42.458 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:42.458 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.458 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:42.717 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.717 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.717 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.717 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.717 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.717 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.717 { 00:14:42.717 "cntlid": 77, 00:14:42.717 "qid": 0, 00:14:42.717 "state": "enabled", 00:14:42.717 "thread": "nvmf_tgt_poll_group_000", 00:14:42.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:42.717 "listen_address": { 00:14:42.717 "trtype": "TCP", 00:14:42.717 "adrfam": "IPv4", 00:14:42.717 "traddr": "10.0.0.3", 00:14:42.717 "trsvcid": "4420" 00:14:42.717 }, 00:14:42.717 "peer_address": { 00:14:42.717 "trtype": "TCP", 00:14:42.717 "adrfam": "IPv4", 00:14:42.717 "traddr": "10.0.0.1", 00:14:42.717 "trsvcid": "39582" 00:14:42.717 }, 00:14:42.717 "auth": { 00:14:42.717 "state": "completed", 00:14:42.717 "digest": "sha384", 00:14:42.717 "dhgroup": "ffdhe4096" 00:14:42.717 } 00:14:42.717 } 00:14:42.717 ]' 00:14:42.717 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.717 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:42.717 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:14:42.717 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:42.717 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.717 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.717 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.717 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.976 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:14:42.976 09:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:14:43.556 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.556 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:43.556 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.556 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.556 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.556 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.556 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:43.556 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:43.855 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:14:43.855 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.855 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:43.855 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:43.855 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:43.855 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.855 09:16:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key3 00:14:43.855 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.855 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.855 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.855 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:43.855 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:43.855 09:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:44.422 00:14:44.422 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.422 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.422 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.681 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.681 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.681 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.681 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.681 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.681 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.681 { 00:14:44.681 "cntlid": 79, 00:14:44.681 "qid": 0, 00:14:44.681 "state": "enabled", 00:14:44.681 "thread": "nvmf_tgt_poll_group_000", 00:14:44.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:44.681 "listen_address": { 00:14:44.681 "trtype": "TCP", 00:14:44.681 "adrfam": "IPv4", 00:14:44.681 "traddr": "10.0.0.3", 00:14:44.681 "trsvcid": "4420" 00:14:44.681 }, 00:14:44.681 "peer_address": { 00:14:44.681 "trtype": "TCP", 00:14:44.681 "adrfam": "IPv4", 00:14:44.681 "traddr": "10.0.0.1", 00:14:44.681 "trsvcid": "39592" 00:14:44.681 }, 00:14:44.681 "auth": { 00:14:44.681 "state": "completed", 00:14:44.681 "digest": "sha384", 00:14:44.681 "dhgroup": "ffdhe4096" 00:14:44.681 } 00:14:44.681 } 00:14:44.681 ]' 00:14:44.681 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.681 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:44.681 09:16:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.681 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:44.681 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.940 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.940 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.940 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.199 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:14:45.199 09:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:14:45.768 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.768 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:45.768 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.768 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.768 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.768 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:45.768 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.768 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:45.768 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:46.336 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:14:46.336 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:46.336 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:46.336 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:46.336 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:46.336 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.336 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.336 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.336 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.336 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.336 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.336 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.336 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.594 00:14:46.594 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:46.594 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.594 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.161 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.161 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.161 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.161 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.161 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.161 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:47.161 { 00:14:47.161 "cntlid": 81, 00:14:47.161 "qid": 0, 00:14:47.161 "state": "enabled", 00:14:47.161 "thread": "nvmf_tgt_poll_group_000", 00:14:47.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:47.161 "listen_address": { 00:14:47.161 "trtype": "TCP", 00:14:47.161 "adrfam": "IPv4", 00:14:47.161 "traddr": "10.0.0.3", 00:14:47.161 "trsvcid": "4420" 00:14:47.161 }, 00:14:47.161 "peer_address": { 00:14:47.161 "trtype": "TCP", 00:14:47.161 "adrfam": "IPv4", 00:14:47.161 "traddr": "10.0.0.1", 00:14:47.161 "trsvcid": "39616" 00:14:47.161 }, 00:14:47.161 "auth": { 00:14:47.161 "state": "completed", 00:14:47.161 "digest": "sha384", 00:14:47.161 "dhgroup": "ffdhe6144" 00:14:47.161 } 00:14:47.161 } 00:14:47.161 ]' 00:14:47.161 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:14:47.161 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:47.161 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:47.161 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:47.161 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:47.161 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.161 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.161 09:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.420 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:14:47.420 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:14:48.356 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.356 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:48.356 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.356 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.356 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.356 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:48.356 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:48.356 09:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:48.356 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:14:48.356 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:48.356 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:48.356 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:14:48.356 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:48.356 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.356 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.356 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.356 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.356 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.356 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.357 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.357 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.925 00:14:48.925 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.925 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.925 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.184 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.184 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.184 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.184 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.184 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.184 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:49.184 { 00:14:49.184 "cntlid": 83, 00:14:49.184 "qid": 0, 00:14:49.184 "state": "enabled", 00:14:49.184 "thread": "nvmf_tgt_poll_group_000", 00:14:49.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:49.184 "listen_address": { 00:14:49.184 "trtype": "TCP", 00:14:49.184 "adrfam": "IPv4", 00:14:49.184 "traddr": "10.0.0.3", 00:14:49.184 "trsvcid": "4420" 00:14:49.184 }, 00:14:49.184 "peer_address": { 00:14:49.184 "trtype": "TCP", 00:14:49.184 "adrfam": "IPv4", 00:14:49.184 "traddr": "10.0.0.1", 00:14:49.184 "trsvcid": "39652" 00:14:49.184 }, 00:14:49.184 "auth": { 00:14:49.184 "state": "completed", 00:14:49.184 "digest": "sha384", 
00:14:49.184 "dhgroup": "ffdhe6144" 00:14:49.184 } 00:14:49.184 } 00:14:49.184 ]' 00:14:49.184 09:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:49.184 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:49.184 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:49.443 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:49.443 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:49.443 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.443 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.443 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.702 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:14:49.702 09:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:14:50.271 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.271 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:50.271 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.271 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.271 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.271 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:50.271 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:50.271 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:50.531 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:14:50.531 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.531 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:14:50.531 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:50.531 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:50.531 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.531 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.531 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.531 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.790 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.790 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.790 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.790 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.049 00:14:51.050 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.050 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.050 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.309 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.309 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.309 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.309 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.309 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.309 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.309 { 00:14:51.309 "cntlid": 85, 00:14:51.309 "qid": 0, 00:14:51.309 "state": "enabled", 00:14:51.309 "thread": "nvmf_tgt_poll_group_000", 00:14:51.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:51.309 "listen_address": { 00:14:51.309 "trtype": "TCP", 00:14:51.309 "adrfam": "IPv4", 00:14:51.309 "traddr": "10.0.0.3", 00:14:51.309 "trsvcid": "4420" 00:14:51.309 }, 00:14:51.309 "peer_address": { 00:14:51.309 "trtype": "TCP", 00:14:51.309 "adrfam": "IPv4", 00:14:51.309 "traddr": "10.0.0.1", 00:14:51.309 "trsvcid": "42960" 
00:14:51.309 }, 00:14:51.309 "auth": { 00:14:51.309 "state": "completed", 00:14:51.309 "digest": "sha384", 00:14:51.309 "dhgroup": "ffdhe6144" 00:14:51.309 } 00:14:51.309 } 00:14:51.309 ]' 00:14:51.309 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:51.309 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:51.309 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:51.567 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:51.567 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:51.567 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.567 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.567 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.826 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:14:51.826 09:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:14:52.394 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.394 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:52.394 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.394 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.394 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.394 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.394 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:52.394 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:52.653 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:14:52.653 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:14:52.653 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:52.653 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:52.653 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:52.653 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.653 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key3 00:14:52.653 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.653 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.653 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.653 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:52.654 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:52.654 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:53.222 00:14:53.222 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:53.222 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.222 09:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:53.489 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.489 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.489 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.489 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.489 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.489 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:53.489 { 00:14:53.489 "cntlid": 87, 00:14:53.489 "qid": 0, 00:14:53.489 "state": "enabled", 00:14:53.489 "thread": "nvmf_tgt_poll_group_000", 00:14:53.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:53.489 "listen_address": { 00:14:53.489 "trtype": "TCP", 00:14:53.489 "adrfam": "IPv4", 00:14:53.489 "traddr": "10.0.0.3", 00:14:53.489 "trsvcid": "4420" 00:14:53.489 }, 00:14:53.489 "peer_address": { 00:14:53.489 "trtype": "TCP", 00:14:53.489 "adrfam": "IPv4", 00:14:53.489 "traddr": "10.0.0.1", 00:14:53.489 "trsvcid": 
"42974" 00:14:53.489 }, 00:14:53.489 "auth": { 00:14:53.489 "state": "completed", 00:14:53.489 "digest": "sha384", 00:14:53.489 "dhgroup": "ffdhe6144" 00:14:53.489 } 00:14:53.489 } 00:14:53.489 ]' 00:14:53.489 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:53.489 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:53.489 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:53.489 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:53.489 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.489 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.489 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.489 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.764 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:14:53.764 09:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:14:54.702 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.702 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:54.702 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.702 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.702 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.702 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:54.702 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:54.702 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:54.702 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:54.961 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:14:54.961 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:14:54.961 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:54.961 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:54.961 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:54.961 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.961 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.961 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.961 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.961 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.961 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.961 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.961 09:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.529 00:14:55.529 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:55.529 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:55.529 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.788 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.788 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.788 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.788 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.788 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.788 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.788 { 00:14:55.788 "cntlid": 89, 00:14:55.788 "qid": 0, 00:14:55.788 "state": "enabled", 00:14:55.788 "thread": "nvmf_tgt_poll_group_000", 00:14:55.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:14:55.788 "listen_address": { 00:14:55.788 "trtype": "TCP", 00:14:55.788 "adrfam": "IPv4", 00:14:55.788 "traddr": "10.0.0.3", 00:14:55.788 "trsvcid": "4420" 00:14:55.788 }, 00:14:55.788 "peer_address": { 00:14:55.788 
"trtype": "TCP", 00:14:55.788 "adrfam": "IPv4", 00:14:55.788 "traddr": "10.0.0.1", 00:14:55.788 "trsvcid": "43006" 00:14:55.788 }, 00:14:55.788 "auth": { 00:14:55.788 "state": "completed", 00:14:55.788 "digest": "sha384", 00:14:55.788 "dhgroup": "ffdhe8192" 00:14:55.788 } 00:14:55.788 } 00:14:55.788 ]' 00:14:55.788 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.788 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:55.788 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:55.788 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:55.788 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:56.047 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.047 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.047 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.306 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:14:56.306 09:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:14:56.873 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.874 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:56.874 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.874 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.874 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.874 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.874 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:56.874 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:57.133 09:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:14:57.133 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:57.133 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:57.133 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:57.133 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:57.133 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.133 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.133 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.133 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.133 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.133 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.133 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.133 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.700 00:14:57.700 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.700 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.700 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.268 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.268 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.268 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.268 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.268 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.268 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.268 { 00:14:58.268 "cntlid": 91, 00:14:58.268 "qid": 0, 00:14:58.268 "state": "enabled", 00:14:58.268 "thread": "nvmf_tgt_poll_group_000", 00:14:58.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 
00:14:58.268 "listen_address": { 00:14:58.268 "trtype": "TCP", 00:14:58.268 "adrfam": "IPv4", 00:14:58.268 "traddr": "10.0.0.3", 00:14:58.268 "trsvcid": "4420" 00:14:58.268 }, 00:14:58.268 "peer_address": { 00:14:58.268 "trtype": "TCP", 00:14:58.268 "adrfam": "IPv4", 00:14:58.268 "traddr": "10.0.0.1", 00:14:58.268 "trsvcid": "43020" 00:14:58.268 }, 00:14:58.268 "auth": { 00:14:58.268 "state": "completed", 00:14:58.268 "digest": "sha384", 00:14:58.268 "dhgroup": "ffdhe8192" 00:14:58.268 } 00:14:58.268 } 00:14:58.268 ]' 00:14:58.268 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.268 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:58.268 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.268 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:58.268 09:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.268 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.268 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.268 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.527 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:14:58.527 09:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:14:59.464 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.464 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:14:59.464 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.464 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.464 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.464 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.464 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:59.464 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:59.464 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:14:59.464 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.464 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:59.464 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:59.464 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:59.464 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.464 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.464 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.464 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.464 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.464 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.464 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.464 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.036 00:15:00.036 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:00.036 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.036 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.295 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.295 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.295 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.295 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.554 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.554 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.554 { 00:15:00.554 "cntlid": 93, 00:15:00.554 "qid": 0, 00:15:00.554 "state": "enabled", 00:15:00.554 "thread": 
"nvmf_tgt_poll_group_000", 00:15:00.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:00.554 "listen_address": { 00:15:00.554 "trtype": "TCP", 00:15:00.554 "adrfam": "IPv4", 00:15:00.554 "traddr": "10.0.0.3", 00:15:00.554 "trsvcid": "4420" 00:15:00.554 }, 00:15:00.554 "peer_address": { 00:15:00.554 "trtype": "TCP", 00:15:00.554 "adrfam": "IPv4", 00:15:00.554 "traddr": "10.0.0.1", 00:15:00.554 "trsvcid": "52728" 00:15:00.554 }, 00:15:00.554 "auth": { 00:15:00.554 "state": "completed", 00:15:00.554 "digest": "sha384", 00:15:00.554 "dhgroup": "ffdhe8192" 00:15:00.554 } 00:15:00.554 } 00:15:00.554 ]' 00:15:00.554 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.554 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:00.554 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.554 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:00.554 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.554 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.554 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.554 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.813 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:15:00.813 09:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:15:01.750 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.750 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:01.750 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.750 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.750 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.750 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.750 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:01.750 09:16:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:01.750 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:15:01.750 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.750 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:01.750 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:01.750 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:01.750 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.750 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key3 00:15:01.750 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.750 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.750 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.750 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:01.750 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:01.750 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:02.687 00:15:02.688 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.688 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.688 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.688 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.688 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.688 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.688 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.688 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.688 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.688 { 00:15:02.688 "cntlid": 95, 00:15:02.688 "qid": 0, 00:15:02.688 "state": "enabled", 00:15:02.688 
"thread": "nvmf_tgt_poll_group_000", 00:15:02.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:02.688 "listen_address": { 00:15:02.688 "trtype": "TCP", 00:15:02.688 "adrfam": "IPv4", 00:15:02.688 "traddr": "10.0.0.3", 00:15:02.688 "trsvcid": "4420" 00:15:02.688 }, 00:15:02.688 "peer_address": { 00:15:02.688 "trtype": "TCP", 00:15:02.688 "adrfam": "IPv4", 00:15:02.688 "traddr": "10.0.0.1", 00:15:02.688 "trsvcid": "52760" 00:15:02.688 }, 00:15:02.688 "auth": { 00:15:02.688 "state": "completed", 00:15:02.688 "digest": "sha384", 00:15:02.688 "dhgroup": "ffdhe8192" 00:15:02.688 } 00:15:02.688 } 00:15:02.688 ]' 00:15:02.688 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.947 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:02.947 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.947 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:02.947 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.947 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.947 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.947 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.218 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:15:03.218 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:15:03.799 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.799 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:03.799 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.799 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.799 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.799 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:03.799 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:03.799 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.799 09:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:03.799 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:04.058 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:15:04.058 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.058 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:04.058 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:04.058 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:04.058 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.058 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.058 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.058 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.058 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.058 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.058 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.058 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.625 00:15:04.625 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.625 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.625 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.625 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.625 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.625 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.626 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.884 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.884 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.884 { 00:15:04.884 "cntlid": 97, 00:15:04.884 "qid": 0, 00:15:04.884 "state": "enabled", 00:15:04.884 "thread": "nvmf_tgt_poll_group_000", 00:15:04.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:04.884 "listen_address": { 00:15:04.884 "trtype": "TCP", 00:15:04.884 "adrfam": "IPv4", 00:15:04.884 "traddr": "10.0.0.3", 00:15:04.884 "trsvcid": "4420" 00:15:04.884 }, 00:15:04.884 "peer_address": { 00:15:04.884 "trtype": "TCP", 00:15:04.884 "adrfam": "IPv4", 00:15:04.884 "traddr": "10.0.0.1", 00:15:04.884 "trsvcid": "52782" 00:15:04.884 }, 00:15:04.884 "auth": { 00:15:04.884 "state": "completed", 00:15:04.884 "digest": "sha512", 00:15:04.884 "dhgroup": "null" 00:15:04.884 } 00:15:04.884 } 00:15:04.884 ]' 00:15:04.884 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.884 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:04.884 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.884 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:04.884 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.884 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.884 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.884 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.143 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:15:05.143 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:15:05.711 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.711 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:05.711 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.711 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.711 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:05.711 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.711 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:05.711 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:05.969 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:15:05.969 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.969 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:05.969 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:05.969 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:05.969 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.969 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.969 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.969 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.969 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.969 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.970 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.970 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.228 00:15:06.487 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.487 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.487 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.745 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.745 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.745 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.745 09:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.745 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.745 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.745 { 00:15:06.745 "cntlid": 99, 00:15:06.745 "qid": 0, 00:15:06.746 "state": "enabled", 00:15:06.746 "thread": "nvmf_tgt_poll_group_000", 00:15:06.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:06.746 "listen_address": { 00:15:06.746 "trtype": "TCP", 00:15:06.746 "adrfam": "IPv4", 00:15:06.746 "traddr": "10.0.0.3", 00:15:06.746 "trsvcid": "4420" 00:15:06.746 }, 00:15:06.746 "peer_address": { 00:15:06.746 "trtype": "TCP", 00:15:06.746 "adrfam": "IPv4", 00:15:06.746 "traddr": "10.0.0.1", 00:15:06.746 "trsvcid": "52812" 00:15:06.746 }, 00:15:06.746 "auth": { 00:15:06.746 "state": "completed", 00:15:06.746 "digest": "sha512", 00:15:06.746 "dhgroup": "null" 00:15:06.746 } 00:15:06.746 } 00:15:06.746 ]' 00:15:06.746 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.746 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:06.746 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.746 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:06.746 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.746 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.746 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.746 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.004 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:15:07.004 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:15:07.572 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.572 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:07.572 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.572 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.572 09:17:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.572 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.572 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:07.572 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:07.832 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:15:07.832 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.832 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:07.832 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:07.832 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:07.832 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.832 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.832 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.832 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.832 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.832 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.832 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.832 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.401 00:15:08.401 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.401 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.401 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.659 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.659 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.659 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.660 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.660 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.660 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.660 { 00:15:08.660 "cntlid": 101, 00:15:08.660 "qid": 0, 00:15:08.660 "state": "enabled", 00:15:08.660 "thread": "nvmf_tgt_poll_group_000", 00:15:08.660 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:08.660 "listen_address": { 00:15:08.660 "trtype": "TCP", 00:15:08.660 "adrfam": "IPv4", 00:15:08.660 "traddr": "10.0.0.3", 00:15:08.660 "trsvcid": "4420" 00:15:08.660 }, 00:15:08.660 "peer_address": { 00:15:08.660 "trtype": "TCP", 00:15:08.660 "adrfam": "IPv4", 00:15:08.660 "traddr": "10.0.0.1", 00:15:08.660 "trsvcid": "52848" 00:15:08.660 }, 00:15:08.660 "auth": { 00:15:08.660 "state": "completed", 00:15:08.660 "digest": "sha512", 00:15:08.660 "dhgroup": "null" 00:15:08.660 } 00:15:08.660 } 00:15:08.660 ]' 00:15:08.660 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.660 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:08.660 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.660 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:08.660 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.660 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.660 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.660 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.227 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:15:09.227 09:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:15:09.795 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.795 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:09.795 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.795 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:15:09.795 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.795 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:09.795 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:09.795 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:10.054 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:15:10.054 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.054 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:10.054 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:10.054 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:10.054 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.054 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key3 00:15:10.054 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.054 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.054 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.054 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:10.054 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:10.054 09:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:10.312 00:15:10.312 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.312 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.312 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.571 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.571 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.571 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:10.571 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.571 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.571 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.571 { 00:15:10.571 "cntlid": 103, 00:15:10.571 "qid": 0, 00:15:10.571 "state": "enabled", 00:15:10.571 "thread": "nvmf_tgt_poll_group_000", 00:15:10.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:10.571 "listen_address": { 00:15:10.571 "trtype": "TCP", 00:15:10.571 "adrfam": "IPv4", 00:15:10.571 "traddr": "10.0.0.3", 00:15:10.571 "trsvcid": "4420" 00:15:10.571 }, 00:15:10.571 "peer_address": { 00:15:10.571 "trtype": "TCP", 00:15:10.571 "adrfam": "IPv4", 00:15:10.571 "traddr": "10.0.0.1", 00:15:10.571 "trsvcid": "59218" 00:15:10.571 }, 00:15:10.571 "auth": { 00:15:10.571 "state": "completed", 00:15:10.571 "digest": "sha512", 00:15:10.571 "dhgroup": "null" 00:15:10.571 } 00:15:10.571 } 00:15:10.571 ]' 00:15:10.571 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.830 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:10.830 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.830 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:10.830 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.830 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.830 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.830 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.089 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:15:11.089 09:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:15:11.660 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.660 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:11.660 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.660 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.918 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:11.918 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:11.918 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:11.918 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:11.919 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:11.919 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:15:11.919 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.919 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:11.919 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:11.919 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:11.919 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.919 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.919 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.919 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.919 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.919 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.919 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.919 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.485 00:15:12.485 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.485 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.485 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:12.744 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.744 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.744 
09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.744 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.744 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.744 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.744 { 00:15:12.744 "cntlid": 105, 00:15:12.744 "qid": 0, 00:15:12.744 "state": "enabled", 00:15:12.744 "thread": "nvmf_tgt_poll_group_000", 00:15:12.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:12.744 "listen_address": { 00:15:12.744 "trtype": "TCP", 00:15:12.744 "adrfam": "IPv4", 00:15:12.744 "traddr": "10.0.0.3", 00:15:12.744 "trsvcid": "4420" 00:15:12.744 }, 00:15:12.744 "peer_address": { 00:15:12.744 "trtype": "TCP", 00:15:12.744 "adrfam": "IPv4", 00:15:12.744 "traddr": "10.0.0.1", 00:15:12.744 "trsvcid": "59244" 00:15:12.744 }, 00:15:12.744 "auth": { 00:15:12.744 "state": "completed", 00:15:12.744 "digest": "sha512", 00:15:12.744 "dhgroup": "ffdhe2048" 00:15:12.744 } 00:15:12.744 } 00:15:12.744 ]' 00:15:12.744 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.744 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:12.744 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.744 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:12.744 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.744 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.744 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.744 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.012 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:15:13.012 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:15:13.594 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.595 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:13.595 09:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.595 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.595 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.595 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.595 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:13.595 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:13.852 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:15:13.852 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.852 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:13.852 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:13.852 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:13.852 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.852 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.852 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.852 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.852 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.852 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.852 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.852 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.420 00:15:14.420 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.420 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.420 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.679 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:15:14.679 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.679 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.679 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.679 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.679 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.679 { 00:15:14.679 "cntlid": 107, 00:15:14.679 "qid": 0, 00:15:14.679 "state": "enabled", 00:15:14.679 "thread": "nvmf_tgt_poll_group_000", 00:15:14.679 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:14.679 "listen_address": { 00:15:14.679 "trtype": "TCP", 00:15:14.679 "adrfam": "IPv4", 00:15:14.679 "traddr": "10.0.0.3", 00:15:14.679 "trsvcid": "4420" 00:15:14.679 }, 00:15:14.679 "peer_address": { 00:15:14.679 "trtype": "TCP", 00:15:14.679 "adrfam": "IPv4", 00:15:14.679 "traddr": "10.0.0.1", 00:15:14.679 "trsvcid": "59276" 00:15:14.679 }, 00:15:14.679 "auth": { 00:15:14.679 "state": "completed", 00:15:14.679 "digest": "sha512", 00:15:14.679 "dhgroup": "ffdhe2048" 00:15:14.679 } 00:15:14.679 } 00:15:14.679 ]' 00:15:14.679 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.679 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:14.679 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.679 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:14.679 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.679 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.679 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.679 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.938 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:15:14.938 09:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:15:15.875 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.875 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:15.875 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.875 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.875 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.875 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.875 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:15.875 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:15.875 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:15:15.875 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.875 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:15.875 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:15.875 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:15.875 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.875 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.875 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.875 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.875 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.875 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.875 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.875 09:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.442 00:15:16.442 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.442 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.442 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:16.701 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.701 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.701 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.701 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.701 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.701 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.701 { 00:15:16.701 "cntlid": 109, 00:15:16.701 "qid": 0, 00:15:16.701 "state": "enabled", 00:15:16.701 "thread": "nvmf_tgt_poll_group_000", 00:15:16.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:16.701 "listen_address": { 00:15:16.701 "trtype": "TCP", 00:15:16.701 "adrfam": "IPv4", 00:15:16.701 "traddr": "10.0.0.3", 00:15:16.701 "trsvcid": "4420" 00:15:16.701 }, 00:15:16.701 "peer_address": { 00:15:16.701 "trtype": "TCP", 00:15:16.701 "adrfam": "IPv4", 00:15:16.701 "traddr": "10.0.0.1", 00:15:16.701 "trsvcid": "59300" 00:15:16.701 }, 00:15:16.701 "auth": { 00:15:16.701 "state": "completed", 00:15:16.701 "digest": "sha512", 00:15:16.701 "dhgroup": "ffdhe2048" 00:15:16.701 } 00:15:16.701 } 00:15:16.701 ]' 00:15:16.701 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.701 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:16.701 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.701 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:16.701 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.701 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.701 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.702 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.985 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:15:16.985 09:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:15:17.552 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.812 09:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:17.812 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.812 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.812 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.812 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.812 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:17.812 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:17.812 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:15:17.812 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.812 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:17.812 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:17.812 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:17.812 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.812 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key3 00:15:17.812 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.812 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.071 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.071 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:18.071 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:18.071 09:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:18.330 00:15:18.330 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.330 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.330 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.590 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.590 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.590 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.590 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.590 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.590 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.590 { 00:15:18.590 "cntlid": 111, 00:15:18.590 "qid": 0, 00:15:18.590 "state": "enabled", 00:15:18.590 "thread": "nvmf_tgt_poll_group_000", 00:15:18.590 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:18.590 "listen_address": { 00:15:18.590 "trtype": "TCP", 00:15:18.590 "adrfam": "IPv4", 00:15:18.590 "traddr": "10.0.0.3", 00:15:18.590 "trsvcid": "4420" 00:15:18.590 }, 00:15:18.590 "peer_address": { 00:15:18.590 "trtype": "TCP", 00:15:18.590 "adrfam": "IPv4", 00:15:18.590 "traddr": "10.0.0.1", 00:15:18.590 "trsvcid": "59324" 00:15:18.590 }, 00:15:18.590 "auth": { 00:15:18.590 "state": "completed", 00:15:18.590 "digest": "sha512", 00:15:18.590 "dhgroup": "ffdhe2048" 00:15:18.590 } 00:15:18.590 } 00:15:18.590 ]' 00:15:18.590 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.590 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:18.590 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.590 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:18.590 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.849 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.849 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.849 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.108 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:15:19.108 09:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:15:19.677 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.677 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:19.677 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.677 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.677 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.677 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:19.677 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.677 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:19.677 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:19.936 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:15:19.936 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.936 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:19.936 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:19.936 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:19.936 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.936 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.936 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.936 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.936 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.936 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.936 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.936 09:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.503 00:15:20.503 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.503 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.503 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.762 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.762 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.762 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.762 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.762 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.762 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.762 { 00:15:20.762 "cntlid": 113, 00:15:20.762 "qid": 0, 00:15:20.762 "state": "enabled", 00:15:20.762 "thread": "nvmf_tgt_poll_group_000", 00:15:20.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:20.762 "listen_address": { 00:15:20.762 "trtype": "TCP", 00:15:20.762 "adrfam": "IPv4", 00:15:20.762 "traddr": "10.0.0.3", 00:15:20.762 "trsvcid": "4420" 00:15:20.762 }, 00:15:20.762 "peer_address": { 00:15:20.762 "trtype": "TCP", 00:15:20.762 "adrfam": "IPv4", 00:15:20.762 "traddr": "10.0.0.1", 00:15:20.762 "trsvcid": "56946" 00:15:20.762 }, 00:15:20.762 "auth": { 00:15:20.762 "state": "completed", 00:15:20.762 "digest": "sha512", 00:15:20.762 "dhgroup": "ffdhe3072" 00:15:20.762 } 00:15:20.762 } 00:15:20.762 ]' 00:15:20.762 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.762 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:20.762 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.762 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:20.762 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.762 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.762 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.762 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.021 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:15:21.021 09:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:15:21.590 
09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.590 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:21.590 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.590 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.590 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.590 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.590 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:21.590 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:22.158 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:15:22.158 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.158 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:22.158 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:22.158 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:22.158 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.158 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.158 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.158 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.158 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.158 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.158 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.158 09:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.418 00:15:22.418 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.418 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.418 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.682 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.682 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.682 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.682 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.682 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.682 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.682 { 00:15:22.682 "cntlid": 115, 00:15:22.682 "qid": 0, 00:15:22.683 "state": "enabled", 00:15:22.683 "thread": "nvmf_tgt_poll_group_000", 00:15:22.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:22.683 "listen_address": { 00:15:22.683 "trtype": "TCP", 00:15:22.683 "adrfam": "IPv4", 00:15:22.683 "traddr": "10.0.0.3", 00:15:22.683 "trsvcid": "4420" 00:15:22.683 }, 00:15:22.683 "peer_address": { 00:15:22.683 "trtype": "TCP", 00:15:22.683 "adrfam": "IPv4", 00:15:22.683 "traddr": "10.0.0.1", 00:15:22.683 "trsvcid": "56964" 00:15:22.683 }, 00:15:22.683 "auth": { 00:15:22.683 "state": "completed", 00:15:22.683 "digest": "sha512", 00:15:22.683 "dhgroup": "ffdhe3072" 00:15:22.683 } 00:15:22.683 } 00:15:22.683 ]' 00:15:22.683 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.683 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:22.683 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.683 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:22.683 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.683 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.683 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.683 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.946 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:15:22.946 09:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: 
--dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:15:23.882 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.882 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:23.882 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.882 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.882 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.882 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.882 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:23.882 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:24.140 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:15:24.140 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.140 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:24.140 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:24.140 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:24.140 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.140 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.140 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.140 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.140 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.140 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.140 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.140 09:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.707 00:15:24.707 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.707 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.707 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.966 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.966 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.966 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.966 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.966 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.966 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.966 { 00:15:24.966 "cntlid": 117, 00:15:24.966 "qid": 0, 00:15:24.966 "state": "enabled", 00:15:24.966 "thread": "nvmf_tgt_poll_group_000", 00:15:24.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:24.966 "listen_address": { 00:15:24.966 "trtype": "TCP", 00:15:24.966 "adrfam": "IPv4", 00:15:24.966 "traddr": "10.0.0.3", 00:15:24.966 "trsvcid": "4420" 00:15:24.966 }, 00:15:24.966 "peer_address": { 00:15:24.966 "trtype": "TCP", 00:15:24.966 "adrfam": "IPv4", 00:15:24.966 "traddr": "10.0.0.1", 00:15:24.966 "trsvcid": "56986" 00:15:24.966 }, 00:15:24.966 "auth": { 00:15:24.966 "state": "completed", 00:15:24.966 "digest": "sha512", 00:15:24.966 "dhgroup": "ffdhe3072" 00:15:24.966 } 00:15:24.966 } 00:15:24.966 ]' 00:15:24.966 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.966 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:24.966 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.966 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:24.966 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:25.225 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.225 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.225 09:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.483 09:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:15:25.483 09:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 
5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:15:26.418 09:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.418 09:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:26.418 09:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.418 09:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.418 09:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.418 09:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.418 09:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:26.418 09:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:26.677 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:15:26.677 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.677 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:26.677 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:26.677 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:26.677 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.677 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key3 00:15:26.677 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.677 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.677 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.677 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:26.677 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:26.677 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:26.936 00:15:26.936 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.936 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.936 09:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.503 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.503 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.503 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.503 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.503 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.503 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.503 { 00:15:27.503 "cntlid": 119, 00:15:27.503 "qid": 0, 00:15:27.503 "state": "enabled", 00:15:27.503 "thread": "nvmf_tgt_poll_group_000", 00:15:27.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:27.503 "listen_address": { 00:15:27.503 "trtype": "TCP", 00:15:27.503 "adrfam": "IPv4", 00:15:27.503 "traddr": "10.0.0.3", 00:15:27.503 "trsvcid": "4420" 00:15:27.503 }, 00:15:27.503 "peer_address": { 00:15:27.503 "trtype": "TCP", 00:15:27.503 "adrfam": "IPv4", 00:15:27.503 "traddr": "10.0.0.1", 00:15:27.503 "trsvcid": "56998" 00:15:27.503 }, 00:15:27.503 "auth": { 00:15:27.503 "state": "completed", 00:15:27.503 "digest": "sha512", 00:15:27.503 "dhgroup": "ffdhe3072" 00:15:27.503 } 00:15:27.503 } 00:15:27.503 ]' 00:15:27.503 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.503 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:27.503 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.503 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:27.503 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.503 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.503 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.503 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.762 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:15:27.762 09:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret 
DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:15:28.697 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.697 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:28.697 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.697 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.697 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.697 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:28.697 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.697 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:28.697 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:28.955 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:15:28.955 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.955 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:28.955 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:28.955 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:28.956 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.956 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.956 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.956 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.956 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.956 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.956 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.956 09:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.214 00:15:29.214 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.214 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.214 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.782 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.782 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.782 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.782 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.782 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.782 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.782 { 00:15:29.782 "cntlid": 121, 00:15:29.782 "qid": 0, 00:15:29.782 "state": "enabled", 00:15:29.782 "thread": "nvmf_tgt_poll_group_000", 00:15:29.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:29.782 "listen_address": { 00:15:29.782 "trtype": "TCP", 00:15:29.782 "adrfam": "IPv4", 00:15:29.782 "traddr": "10.0.0.3", 00:15:29.782 "trsvcid": "4420" 00:15:29.782 }, 00:15:29.782 "peer_address": { 00:15:29.782 "trtype": "TCP", 00:15:29.782 "adrfam": "IPv4", 00:15:29.782 "traddr": "10.0.0.1", 00:15:29.782 "trsvcid": "57024" 00:15:29.782 }, 00:15:29.782 "auth": { 00:15:29.782 "state": "completed", 00:15:29.782 "digest": "sha512", 00:15:29.782 "dhgroup": "ffdhe4096" 00:15:29.782 } 00:15:29.782 } 00:15:29.782 ]' 00:15:29.782 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.782 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:29.782 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.782 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:29.782 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.782 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.782 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.782 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.041 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:15:30.041 09:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:15:30.976 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.976 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:30.976 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.976 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.976 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.976 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.976 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:30.976 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:30.976 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:15:30.976 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.976 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:30.976 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:30.976 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:30.976 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.976 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.976 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.976 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.976 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.976 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.976 09:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.976 09:17:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.542 00:15:31.542 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.542 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.542 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.801 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.801 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.801 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.801 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.801 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.801 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.801 { 00:15:31.801 "cntlid": 123, 00:15:31.801 "qid": 0, 00:15:31.801 "state": "enabled", 00:15:31.801 "thread": "nvmf_tgt_poll_group_000", 00:15:31.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:31.801 "listen_address": { 00:15:31.801 "trtype": "TCP", 00:15:31.801 "adrfam": "IPv4", 00:15:31.801 "traddr": "10.0.0.3", 00:15:31.801 "trsvcid": "4420" 00:15:31.801 }, 00:15:31.801 "peer_address": { 00:15:31.801 "trtype": "TCP", 00:15:31.801 "adrfam": "IPv4", 00:15:31.801 "traddr": "10.0.0.1", 00:15:31.801 "trsvcid": "53744" 00:15:31.801 }, 00:15:31.801 "auth": { 00:15:31.801 "state": "completed", 00:15:31.801 "digest": "sha512", 00:15:31.801 "dhgroup": "ffdhe4096" 00:15:31.801 } 00:15:31.801 } 00:15:31.801 ]' 00:15:31.801 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.801 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:31.801 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.060 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:32.060 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.060 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.060 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.060 09:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.319 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret 
DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:15:32.319 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:15:32.913 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.913 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:32.913 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.913 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.913 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.913 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.913 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:32.913 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:33.173 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:15:33.173 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.173 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:33.173 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:33.173 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:33.173 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.173 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.173 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.173 09:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.174 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.174 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.174 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.174 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.741 00:15:33.741 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.741 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.741 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.999 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.999 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.999 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.999 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.999 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.999 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.999 { 00:15:33.999 "cntlid": 125, 00:15:33.999 "qid": 0, 00:15:33.999 "state": "enabled", 00:15:33.999 "thread": "nvmf_tgt_poll_group_000", 00:15:33.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:33.999 "listen_address": { 00:15:33.999 "trtype": "TCP", 00:15:33.999 "adrfam": "IPv4", 00:15:33.999 "traddr": "10.0.0.3", 00:15:33.999 "trsvcid": "4420" 00:15:33.999 }, 00:15:33.999 "peer_address": { 00:15:33.999 "trtype": "TCP", 00:15:33.999 "adrfam": "IPv4", 00:15:33.999 "traddr": "10.0.0.1", 00:15:33.999 "trsvcid": "53770" 00:15:33.999 }, 00:15:33.999 "auth": { 00:15:33.999 "state": "completed", 00:15:33.999 "digest": "sha512", 00:15:33.999 "dhgroup": "ffdhe4096" 00:15:33.999 } 00:15:33.999 } 00:15:33.999 ]' 00:15:33.999 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:33.999 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:33.999 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:33.999 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:33.999 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:33.999 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.999 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.999 09:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.566 09:17:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:15:34.566 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:15:35.133 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.133 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:35.133 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.133 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.133 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.133 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.133 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:35.133 09:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:35.392 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:15:35.392 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.392 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:35.392 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:35.392 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:35.392 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.392 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key3 00:15:35.392 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.392 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.392 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.392 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:35.392 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:35.392 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:35.958 00:15:35.958 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.958 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.958 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.217 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.217 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.217 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.217 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.217 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.217 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.217 { 00:15:36.217 "cntlid": 127, 00:15:36.217 "qid": 0, 00:15:36.217 "state": "enabled", 00:15:36.217 "thread": "nvmf_tgt_poll_group_000", 00:15:36.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:36.217 "listen_address": { 00:15:36.217 "trtype": "TCP", 00:15:36.217 "adrfam": "IPv4", 00:15:36.217 "traddr": "10.0.0.3", 00:15:36.217 "trsvcid": "4420" 00:15:36.217 }, 00:15:36.217 "peer_address": { 00:15:36.217 "trtype": "TCP", 00:15:36.217 "adrfam": "IPv4", 00:15:36.217 "traddr": "10.0.0.1", 00:15:36.217 "trsvcid": "53780" 00:15:36.217 }, 00:15:36.217 "auth": { 00:15:36.217 "state": "completed", 00:15:36.217 "digest": "sha512", 00:15:36.217 "dhgroup": "ffdhe4096" 00:15:36.217 } 00:15:36.217 } 00:15:36.217 ]' 00:15:36.217 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.217 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:36.217 09:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.217 09:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:36.217 09:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.217 09:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.217 09:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.217 09:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
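
The connect_authenticate rounds above all repeat the same host/target RPC sequence, only varying the digest, dhgroup and key index. A minimal sketch of one round, with values copied from this log (rpc_cmd is the autotest helper that talks to the target application's RPC socket, while /var/tmp/host.sock is the host-side bdev application):

  # host side: restrict the initiator to a single digest/dhgroup combination
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

  # target side: authorize the host NQN on the subsystem with the key under test
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key3

  # host side: attach a controller with the matching key, then verify the qpair authenticated
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed" (the script captures the JSON and checks digest, dhgroup and state)

  # tear down before the next digest/dhgroup/key combination
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
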
00:15:36.475 09:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:15:36.475 09:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:15:37.410 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.410 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:37.410 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.410 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.410 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.410 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:37.410 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.410 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:37.410 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:37.668 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:15:37.668 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.668 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:37.668 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:37.668 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:37.668 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.668 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.668 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.668 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.668 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.668 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:15:37.668 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.668 09:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.233 00:15:38.233 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.233 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.233 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.800 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.800 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.800 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.800 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.800 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.800 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.800 { 00:15:38.800 "cntlid": 129, 00:15:38.800 "qid": 0, 00:15:38.800 "state": "enabled", 00:15:38.800 "thread": "nvmf_tgt_poll_group_000", 00:15:38.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:38.800 "listen_address": { 00:15:38.800 "trtype": "TCP", 00:15:38.800 "adrfam": "IPv4", 00:15:38.800 "traddr": "10.0.0.3", 00:15:38.800 "trsvcid": "4420" 00:15:38.800 }, 00:15:38.800 "peer_address": { 00:15:38.800 "trtype": "TCP", 00:15:38.800 "adrfam": "IPv4", 00:15:38.800 "traddr": "10.0.0.1", 00:15:38.800 "trsvcid": "53808" 00:15:38.800 }, 00:15:38.800 "auth": { 00:15:38.800 "state": "completed", 00:15:38.800 "digest": "sha512", 00:15:38.800 "dhgroup": "ffdhe6144" 00:15:38.800 } 00:15:38.800 } 00:15:38.800 ]' 00:15:38.800 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.800 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:38.800 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.800 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:38.800 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.800 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.800 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.800 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.059 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:15:39.059 09:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:15:39.996 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.996 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:39.996 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.996 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.996 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.996 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.996 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:39.996 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:40.255 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:15:40.255 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.255 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:40.255 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:40.255 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:40.255 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.255 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.255 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.255 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.255 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.255 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.255 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.255 09:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.823 00:15:40.823 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.823 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.823 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.082 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.082 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.082 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.082 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.082 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.082 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.082 { 00:15:41.082 "cntlid": 131, 00:15:41.082 "qid": 0, 00:15:41.082 "state": "enabled", 00:15:41.082 "thread": "nvmf_tgt_poll_group_000", 00:15:41.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:41.082 "listen_address": { 00:15:41.083 "trtype": "TCP", 00:15:41.083 "adrfam": "IPv4", 00:15:41.083 "traddr": "10.0.0.3", 00:15:41.083 "trsvcid": "4420" 00:15:41.083 }, 00:15:41.083 "peer_address": { 00:15:41.083 "trtype": "TCP", 00:15:41.083 "adrfam": "IPv4", 00:15:41.083 "traddr": "10.0.0.1", 00:15:41.083 "trsvcid": "37712" 00:15:41.083 }, 00:15:41.083 "auth": { 00:15:41.083 "state": "completed", 00:15:41.083 "digest": "sha512", 00:15:41.083 "dhgroup": "ffdhe6144" 00:15:41.083 } 00:15:41.083 } 00:15:41.083 ]' 00:15:41.083 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.083 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:41.083 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.083 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:41.083 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.083 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
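
The kernel-initiator leg of each round drives the same keys through nvme-cli: nvme_connect in the script expands to the nvme connect invocation with the DHHC-1 secrets printed above, and the round is torn down with nvme disconnect plus nvmf_subsystem_remove_host. Roughly, with the secrets elided here for brevity (the full DHHC-1:01/DHHC-1:02 values appear in the records above):

  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a \
      --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 \
      --dhchap-secret "DHHC-1:01:<host key as above>" --dhchap-ctrl-secret "DHHC-1:02:<ctrl key as above>"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a
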
00:15:41.083 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.083 09:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.344 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:15:41.344 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:15:42.281 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.281 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:42.281 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.281 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.281 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.281 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.281 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:42.281 09:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:42.539 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:15:42.539 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.539 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:42.539 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:42.539 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:42.539 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.539 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.539 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.539 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:42.539 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.539 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.539 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.539 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.123 00:15:43.123 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.123 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.123 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.123 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.123 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.123 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.123 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.123 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.124 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.124 { 00:15:43.124 "cntlid": 133, 00:15:43.124 "qid": 0, 00:15:43.124 "state": "enabled", 00:15:43.124 "thread": "nvmf_tgt_poll_group_000", 00:15:43.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:43.124 "listen_address": { 00:15:43.124 "trtype": "TCP", 00:15:43.124 "adrfam": "IPv4", 00:15:43.124 "traddr": "10.0.0.3", 00:15:43.124 "trsvcid": "4420" 00:15:43.124 }, 00:15:43.124 "peer_address": { 00:15:43.124 "trtype": "TCP", 00:15:43.124 "adrfam": "IPv4", 00:15:43.124 "traddr": "10.0.0.1", 00:15:43.124 "trsvcid": "37732" 00:15:43.124 }, 00:15:43.124 "auth": { 00:15:43.124 "state": "completed", 00:15:43.124 "digest": "sha512", 00:15:43.124 "dhgroup": "ffdhe6144" 00:15:43.124 } 00:15:43.124 } 00:15:43.124 ]' 00:15:43.124 09:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.383 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:43.383 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.383 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:43.383 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.383 09:17:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.383 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.383 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.642 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:15:43.642 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:15:44.210 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.210 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:44.210 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.210 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.210 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.210 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.210 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:44.210 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:44.469 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:15:44.469 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.469 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:44.469 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:44.469 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:44.469 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.469 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key3 00:15:44.469 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:44.469 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.469 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.469 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:44.469 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:44.469 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:45.036 00:15:45.036 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.036 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.036 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.295 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.295 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.295 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.295 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.295 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.295 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.295 { 00:15:45.295 "cntlid": 135, 00:15:45.295 "qid": 0, 00:15:45.295 "state": "enabled", 00:15:45.295 "thread": "nvmf_tgt_poll_group_000", 00:15:45.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:45.295 "listen_address": { 00:15:45.295 "trtype": "TCP", 00:15:45.295 "adrfam": "IPv4", 00:15:45.295 "traddr": "10.0.0.3", 00:15:45.295 "trsvcid": "4420" 00:15:45.295 }, 00:15:45.295 "peer_address": { 00:15:45.295 "trtype": "TCP", 00:15:45.295 "adrfam": "IPv4", 00:15:45.295 "traddr": "10.0.0.1", 00:15:45.295 "trsvcid": "37758" 00:15:45.295 }, 00:15:45.295 "auth": { 00:15:45.295 "state": "completed", 00:15:45.295 "digest": "sha512", 00:15:45.295 "dhgroup": "ffdhe6144" 00:15:45.295 } 00:15:45.295 } 00:15:45.295 ]' 00:15:45.295 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.295 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:45.295 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.554 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:45.554 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.554 
09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.554 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.554 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.813 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:15:45.813 09:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:15:46.381 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.381 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:46.381 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.381 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.381 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.381 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:46.381 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.381 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:46.381 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:46.640 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:15:46.640 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.640 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:46.640 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:46.640 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:46.640 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.640 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.640 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.640 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.640 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.640 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.640 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.641 09:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.209 00:15:47.467 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.467 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.467 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.467 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.467 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.467 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.467 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.727 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.727 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.727 { 00:15:47.727 "cntlid": 137, 00:15:47.727 "qid": 0, 00:15:47.727 "state": "enabled", 00:15:47.727 "thread": "nvmf_tgt_poll_group_000", 00:15:47.727 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:47.727 "listen_address": { 00:15:47.727 "trtype": "TCP", 00:15:47.727 "adrfam": "IPv4", 00:15:47.727 "traddr": "10.0.0.3", 00:15:47.727 "trsvcid": "4420" 00:15:47.727 }, 00:15:47.727 "peer_address": { 00:15:47.727 "trtype": "TCP", 00:15:47.727 "adrfam": "IPv4", 00:15:47.727 "traddr": "10.0.0.1", 00:15:47.727 "trsvcid": "37780" 00:15:47.727 }, 00:15:47.727 "auth": { 00:15:47.727 "state": "completed", 00:15:47.727 "digest": "sha512", 00:15:47.727 "dhgroup": "ffdhe8192" 00:15:47.727 } 00:15:47.727 } 00:15:47.727 ]' 00:15:47.727 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.727 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:47.727 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.727 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:47.727 09:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.727 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.727 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.727 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.985 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:15:47.985 09:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:15:48.552 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.552 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:48.552 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.552 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.552 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.552 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.552 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:48.552 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:48.811 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:15:48.811 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.811 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:48.811 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:48.811 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:48.811 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.811 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.811 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.811 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.811 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.811 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.811 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.811 09:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.378 00:15:49.378 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.378 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.378 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.636 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.636 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.636 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.636 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.636 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.636 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.636 { 00:15:49.636 "cntlid": 139, 00:15:49.636 "qid": 0, 00:15:49.636 "state": "enabled", 00:15:49.636 "thread": "nvmf_tgt_poll_group_000", 00:15:49.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:49.636 "listen_address": { 00:15:49.636 "trtype": "TCP", 00:15:49.636 "adrfam": "IPv4", 00:15:49.636 "traddr": "10.0.0.3", 00:15:49.636 "trsvcid": "4420" 00:15:49.636 }, 00:15:49.636 "peer_address": { 00:15:49.636 "trtype": "TCP", 00:15:49.636 "adrfam": "IPv4", 00:15:49.636 "traddr": "10.0.0.1", 00:15:49.636 "trsvcid": "37798" 00:15:49.636 }, 00:15:49.636 "auth": { 00:15:49.636 "state": "completed", 00:15:49.636 "digest": "sha512", 00:15:49.636 "dhgroup": "ffdhe8192" 00:15:49.636 } 00:15:49.636 } 00:15:49.636 ]' 00:15:49.636 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.894 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:49.894 09:17:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.894 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:49.895 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.895 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.895 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.895 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.153 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:15:50.153 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: --dhchap-ctrl-secret DHHC-1:02:OGZhYmI2MTgxZjI3NGM5MGYzYmFjYjhkODlkYTQyMzE5ZTFjYjMxZTI4NmVmNmI3cTMerQ==: 00:15:51.088 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.088 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:51.088 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.088 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.088 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.088 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.088 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:51.088 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:51.346 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:15:51.346 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.346 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:51.346 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:51.346 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:51.346 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.346 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.346 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.346 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.346 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.346 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.346 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.346 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.911 00:15:51.911 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.911 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.911 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.169 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.169 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.169 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.169 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.169 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.169 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.169 { 00:15:52.169 "cntlid": 141, 00:15:52.169 "qid": 0, 00:15:52.169 "state": "enabled", 00:15:52.169 "thread": "nvmf_tgt_poll_group_000", 00:15:52.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:52.169 "listen_address": { 00:15:52.169 "trtype": "TCP", 00:15:52.169 "adrfam": "IPv4", 00:15:52.169 "traddr": "10.0.0.3", 00:15:52.169 "trsvcid": "4420" 00:15:52.169 }, 00:15:52.169 "peer_address": { 00:15:52.169 "trtype": "TCP", 00:15:52.169 "adrfam": "IPv4", 00:15:52.169 "traddr": "10.0.0.1", 00:15:52.169 "trsvcid": "51894" 00:15:52.169 }, 00:15:52.169 "auth": { 00:15:52.169 "state": "completed", 00:15:52.169 "digest": "sha512", 00:15:52.169 "dhgroup": "ffdhe8192" 00:15:52.169 } 00:15:52.169 } 00:15:52.169 ]' 00:15:52.169 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:15:52.169 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:52.169 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.169 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:52.169 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.169 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.169 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.169 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.428 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:15:52.428 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:01:MDAzN2M4MDczNDY5NzAyNWQzZDcyMjQxNGY3MmY4OTE6+IDv: 00:15:53.363 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.363 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:53.363 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.363 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.363 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.363 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.363 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:53.363 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:53.624 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:15:53.624 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.624 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:53.624 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:53.624 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:15:53.624 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.624 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key3 00:15:53.624 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.624 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.624 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.624 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:53.624 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:53.624 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:54.191 00:15:54.191 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.191 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.191 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.449 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.449 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.449 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.449 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.449 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.450 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.450 { 00:15:54.450 "cntlid": 143, 00:15:54.450 "qid": 0, 00:15:54.450 "state": "enabled", 00:15:54.450 "thread": "nvmf_tgt_poll_group_000", 00:15:54.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:54.450 "listen_address": { 00:15:54.450 "trtype": "TCP", 00:15:54.450 "adrfam": "IPv4", 00:15:54.450 "traddr": "10.0.0.3", 00:15:54.450 "trsvcid": "4420" 00:15:54.450 }, 00:15:54.450 "peer_address": { 00:15:54.450 "trtype": "TCP", 00:15:54.450 "adrfam": "IPv4", 00:15:54.450 "traddr": "10.0.0.1", 00:15:54.450 "trsvcid": "51924" 00:15:54.450 }, 00:15:54.450 "auth": { 00:15:54.450 "state": "completed", 00:15:54.450 "digest": "sha512", 00:15:54.450 "dhgroup": "ffdhe8192" 00:15:54.450 } 00:15:54.450 } 00:15:54.450 ]' 00:15:54.450 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:15:54.450 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:54.450 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.450 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:54.450 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.708 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.708 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.708 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.967 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:15:54.967 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:15:55.533 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.534 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:55.534 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.534 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.534 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.534 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:55.534 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:15:55.534 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:55.534 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:55.534 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:55.534 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:56.100 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:15:56.100 09:17:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.100 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:56.100 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:56.100 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:56.100 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.100 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.100 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.100 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.100 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.100 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.100 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.100 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.666 00:15:56.666 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.666 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.666 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.925 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.925 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.925 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.925 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.925 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.925 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.925 { 00:15:56.925 "cntlid": 145, 00:15:56.925 "qid": 0, 00:15:56.925 "state": "enabled", 00:15:56.925 "thread": "nvmf_tgt_poll_group_000", 00:15:56.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:56.925 "listen_address": { 00:15:56.925 "trtype": "TCP", 00:15:56.925 "adrfam": "IPv4", 00:15:56.925 "traddr": "10.0.0.3", 
00:15:56.925 "trsvcid": "4420" 00:15:56.925 }, 00:15:56.925 "peer_address": { 00:15:56.925 "trtype": "TCP", 00:15:56.925 "adrfam": "IPv4", 00:15:56.925 "traddr": "10.0.0.1", 00:15:56.925 "trsvcid": "51948" 00:15:56.925 }, 00:15:56.925 "auth": { 00:15:56.925 "state": "completed", 00:15:56.925 "digest": "sha512", 00:15:56.925 "dhgroup": "ffdhe8192" 00:15:56.925 } 00:15:56.925 } 00:15:56.925 ]' 00:15:56.925 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.925 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:56.925 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.925 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:56.925 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.183 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.183 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.183 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.441 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:15:57.441 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:00:ODFjNTA1YjZiMTU1YTYyYjY3OTA3YWNjZjY0ZTA5OWFkMDliNTk2MDE3YzM5MTMzG+zusA==: --dhchap-ctrl-secret DHHC-1:03:OWQ4MTQ2ZmQzYmUwNmM1OWNmOTJlODQ1ODU1OWUzM2E5MTIxMTY0ODE5MzllMThhMzlmYmNhZTZiNmJlZjQ2MKwog+s=: 00:15:58.008 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.008 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:58.008 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.008 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.008 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.008 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key1 00:15:58.008 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.008 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.008 
09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.008 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:15:58.008 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:58.008 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:15:58.008 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:58.008 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:58.008 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:58.008 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:58.008 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:15:58.008 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:58.008 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:58.575 request: 00:15:58.575 { 00:15:58.575 "name": "nvme0", 00:15:58.575 "trtype": "tcp", 00:15:58.575 "traddr": "10.0.0.3", 00:15:58.575 "adrfam": "ipv4", 00:15:58.575 "trsvcid": "4420", 00:15:58.575 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:58.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:58.575 "prchk_reftag": false, 00:15:58.575 "prchk_guard": false, 00:15:58.575 "hdgst": false, 00:15:58.575 "ddgst": false, 00:15:58.575 "dhchap_key": "key2", 00:15:58.575 "allow_unrecognized_csi": false, 00:15:58.575 "method": "bdev_nvme_attach_controller", 00:15:58.575 "req_id": 1 00:15:58.575 } 00:15:58.575 Got JSON-RPC error response 00:15:58.575 response: 00:15:58.575 { 00:15:58.575 "code": -5, 00:15:58.575 "message": "Input/output error" 00:15:58.575 } 00:15:58.575 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:58.575 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:58.575 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:58.575 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:58.575 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:58.575 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.575 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
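The request/response pair above is the expected failure path: the subsystem entry for this host only carries key1, so an attach that offers key2 cannot complete the DH-HMAC-CHAP handshake and bdev_nvme_attach_controller comes back with JSON-RPC code -5 (Input/output error), which the test's NOT wrapper treats as a pass. A minimal sketch of that negative check with the same NQNs as above (the trailing echo is only an illustration of asserting the result outside the test framework):

  # target side: host entry allows only key1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key1
  # host side: offering key2 must be rejected; expect "Input/output error" (-5)
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 \
      && echo 'unexpected: attach succeeded' || echo 'rejected as expected'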
00:15:58.575 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.575 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.575 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.575 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.575 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.575 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:58.575 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:58.575 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:58.575 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:58.575 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:58.575 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:58.575 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:58.575 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:58.575 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:58.575 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:59.143 request: 00:15:59.143 { 00:15:59.143 "name": "nvme0", 00:15:59.143 "trtype": "tcp", 00:15:59.143 "traddr": "10.0.0.3", 00:15:59.143 "adrfam": "ipv4", 00:15:59.143 "trsvcid": "4420", 00:15:59.143 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:59.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:15:59.143 "prchk_reftag": false, 00:15:59.143 "prchk_guard": false, 00:15:59.143 "hdgst": false, 00:15:59.143 "ddgst": false, 00:15:59.143 "dhchap_key": "key1", 00:15:59.143 "dhchap_ctrlr_key": "ckey2", 00:15:59.143 "allow_unrecognized_csi": false, 00:15:59.143 "method": "bdev_nvme_attach_controller", 00:15:59.143 "req_id": 1 00:15:59.143 } 00:15:59.143 Got JSON-RPC error response 00:15:59.143 response: 00:15:59.143 { 00:15:59.143 "code": -5, 00:15:59.143 "message": "Input/output error" 00:15:59.143 } 00:15:59.143 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:59.143 09:17:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:59.143 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:59.143 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:59.143 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:15:59.143 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.143 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.143 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.143 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key1 00:15:59.143 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.143 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.143 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.143 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.143 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:59.143 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.143 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:59.143 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:59.143 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:59.143 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:59.143 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.143 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.143 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.079 request: 00:16:00.079 { 00:16:00.079 "name": "nvme0", 00:16:00.079 "trtype": "tcp", 00:16:00.079 "traddr": "10.0.0.3", 00:16:00.079 "adrfam": "ipv4", 00:16:00.079 "trsvcid": "4420", 00:16:00.079 "subnqn": 
"nqn.2024-03.io.spdk:cnode0", 00:16:00.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:16:00.079 "prchk_reftag": false, 00:16:00.079 "prchk_guard": false, 00:16:00.079 "hdgst": false, 00:16:00.079 "ddgst": false, 00:16:00.079 "dhchap_key": "key1", 00:16:00.079 "dhchap_ctrlr_key": "ckey1", 00:16:00.079 "allow_unrecognized_csi": false, 00:16:00.079 "method": "bdev_nvme_attach_controller", 00:16:00.079 "req_id": 1 00:16:00.079 } 00:16:00.079 Got JSON-RPC error response 00:16:00.079 response: 00:16:00.079 { 00:16:00.079 "code": -5, 00:16:00.079 "message": "Input/output error" 00:16:00.079 } 00:16:00.079 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:00.079 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:00.079 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:00.079 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:00.079 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:16:00.079 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.079 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.079 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.079 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 71660 00:16:00.079 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 71660 ']' 00:16:00.079 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 71660 00:16:00.079 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:00.079 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:00.079 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71660 00:16:00.079 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:00.079 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:00.079 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71660' 00:16:00.079 killing process with pid 71660 00:16:00.079 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 71660 00:16:00.079 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 71660 00:16:01.016 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:01.016 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:01.016 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:01.016 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:01.016 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=74710 00:16:01.016 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:01.016 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 74710 00:16:01.016 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 74710 ']' 00:16:01.016 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.016 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:01.016 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.016 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:01.016 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.391 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:02.391 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:02.391 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:02.391 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:02.391 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.391 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:02.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.392 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:02.392 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 74710 00:16:02.392 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 74710 ']' 00:16:02.392 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.392 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:02.392 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
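The restart being waited on above is roughly equivalent to the following invocation, taken from the trace; backgrounding with & and the subsequent poll of /var/tmp/spdk.sock are how the common test helpers behave, and the netns name and binary path are specific to this CI environment:

# Restart the target in its network namespace with DH-HMAC-CHAP debug logging
# enabled (-L nvmf_auth); the suite then waits for the RPC socket to come up.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &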
00:16:02.392 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:02.392 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.650 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:02.650 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:02.650 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:16:02.650 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.650 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.909 null0 00:16:02.909 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.909 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:02.909 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.GZL 00:16:02.909 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.909 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.909 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.909 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.bhU ]] 00:16:02.909 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bhU 00:16:02.909 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.909 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.909 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.909 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:02.909 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.vcH 00:16:02.909 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.909 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.909 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.909 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Kq3 ]] 00:16:02.909 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Kq3 00:16:02.909 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.909 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.909 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.909 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:02.910 09:17:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.fQ5 00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.s5z ]] 00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.s5z 00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.MEw 00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key3 00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
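A condensed sketch of the successful-path flow started here, assuming the key file generated earlier in this run (/tmp/spdk.key-sha512.MEw for key3) and scripts/rpc.py invoked directly instead of the rpc_cmd/hostrpc wrappers used by the trace:

# Target side: register the key material and allow the host to authenticate with it.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.MEw
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a \
    --dhchap-key key3

# Host side: attach with the same key; nvmf_subsystem_get_qpairs on the target
# should then report an auth state of "completed" with sha512/ffdhe8192,
# which is what the jq checks below verify.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3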
00:16:02.910 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.288 nvme0n1 00:16:04.288 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.288 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.288 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.547 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.547 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.547 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.547 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.547 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.547 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.547 { 00:16:04.547 "cntlid": 1, 00:16:04.547 "qid": 0, 00:16:04.547 "state": "enabled", 00:16:04.547 "thread": "nvmf_tgt_poll_group_000", 00:16:04.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:16:04.547 "listen_address": { 00:16:04.547 "trtype": "TCP", 00:16:04.547 "adrfam": "IPv4", 00:16:04.547 "traddr": "10.0.0.3", 00:16:04.547 "trsvcid": "4420" 00:16:04.547 }, 00:16:04.547 "peer_address": { 00:16:04.548 "trtype": "TCP", 00:16:04.548 "adrfam": "IPv4", 00:16:04.548 "traddr": "10.0.0.1", 00:16:04.548 "trsvcid": "35050" 00:16:04.548 }, 00:16:04.548 "auth": { 00:16:04.548 "state": "completed", 00:16:04.548 "digest": "sha512", 00:16:04.548 "dhgroup": "ffdhe8192" 00:16:04.548 } 00:16:04.548 } 00:16:04.548 ]' 00:16:04.548 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.548 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:04.548 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.548 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:04.548 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.806 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.806 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.806 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.065 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:16:05.065 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:16:05.634 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.634 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:16:05.634 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.634 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.634 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.634 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key3 00:16:05.634 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.634 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.634 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.634 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:05.634 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:06.201 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:06.201 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:06.202 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:06.202 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:06.202 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:06.202 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:06.202 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:06.202 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:06.202 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:06.202 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:06.460 request: 00:16:06.460 { 00:16:06.460 "name": "nvme0", 00:16:06.460 "trtype": "tcp", 00:16:06.460 "traddr": "10.0.0.3", 00:16:06.460 "adrfam": "ipv4", 00:16:06.460 "trsvcid": "4420", 00:16:06.460 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:06.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:16:06.460 "prchk_reftag": false, 00:16:06.460 "prchk_guard": false, 00:16:06.460 "hdgst": false, 00:16:06.460 "ddgst": false, 00:16:06.460 "dhchap_key": "key3", 00:16:06.460 "allow_unrecognized_csi": false, 00:16:06.460 "method": "bdev_nvme_attach_controller", 00:16:06.460 "req_id": 1 00:16:06.460 } 00:16:06.460 Got JSON-RPC error response 00:16:06.460 response: 00:16:06.460 { 00:16:06.460 "code": -5, 00:16:06.460 "message": "Input/output error" 00:16:06.460 } 00:16:06.460 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:06.460 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:06.460 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:06.460 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:06.460 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:16:06.460 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:16:06.460 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:06.460 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:06.719 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:06.719 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:06.719 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:06.719 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:06.719 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:06.719 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:06.719 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:06.719 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:06.719 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:06.719 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:06.978 request: 00:16:06.978 { 00:16:06.978 "name": "nvme0", 00:16:06.978 "trtype": "tcp", 00:16:06.978 "traddr": "10.0.0.3", 00:16:06.978 "adrfam": "ipv4", 00:16:06.978 "trsvcid": "4420", 00:16:06.978 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:06.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:16:06.978 "prchk_reftag": false, 00:16:06.978 "prchk_guard": false, 00:16:06.978 "hdgst": false, 00:16:06.978 "ddgst": false, 00:16:06.978 "dhchap_key": "key3", 00:16:06.978 "allow_unrecognized_csi": false, 00:16:06.978 "method": "bdev_nvme_attach_controller", 00:16:06.978 "req_id": 1 00:16:06.978 } 00:16:06.978 Got JSON-RPC error response 00:16:06.978 response: 00:16:06.978 { 00:16:06.978 "code": -5, 00:16:06.978 "message": "Input/output error" 00:16:06.978 } 00:16:06.978 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:06.978 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:06.978 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:06.978 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:06.978 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:06.978 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:16:06.978 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:06.978 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:06.978 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:06.978 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:07.237 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:16:07.237 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.237 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.237 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.237 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:16:07.237 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.237 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.237 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.237 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:07.237 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:07.237 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:07.237 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:07.237 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:07.237 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:07.237 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:07.237 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:07.237 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:07.237 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:07.832 request: 00:16:07.832 { 00:16:07.832 "name": "nvme0", 00:16:07.832 "trtype": "tcp", 00:16:07.832 "traddr": "10.0.0.3", 00:16:07.832 "adrfam": "ipv4", 00:16:07.832 "trsvcid": "4420", 00:16:07.832 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:07.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:16:07.832 "prchk_reftag": false, 00:16:07.832 "prchk_guard": false, 00:16:07.832 "hdgst": false, 00:16:07.832 "ddgst": false, 00:16:07.832 "dhchap_key": "key0", 00:16:07.832 "dhchap_ctrlr_key": "key1", 00:16:07.832 "allow_unrecognized_csi": false, 00:16:07.832 "method": "bdev_nvme_attach_controller", 00:16:07.832 "req_id": 1 00:16:07.832 } 00:16:07.832 Got JSON-RPC error response 00:16:07.832 response: 00:16:07.832 { 00:16:07.832 "code": -5, 00:16:07.832 "message": "Input/output error" 00:16:07.832 } 00:16:07.832 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:07.832 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:07.832 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:07.832 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:16:07.832 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:16:07.832 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:07.832 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:08.092 nvme0n1 00:16:08.092 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:16:08.092 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.092 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:16:08.350 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.350 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.350 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.609 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key1 00:16:08.609 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.609 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.609 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.609 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:08.609 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:08.609 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:09.986 nvme0n1 00:16:09.986 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:16:09.986 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.986 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:16:09.986 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.986 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:09.986 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.986 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.986 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.986 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:16:09.986 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:16:09.986 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.553 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.554 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:16:10.554 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid 5267ba90-6d03-4c73-b69a-15b62f92a67a -l 0 --dhchap-secret DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: --dhchap-ctrl-secret DHHC-1:03:MmUzODkwNTZjNGJhNTk2OGMzMmMwNWRhMmQ4ZGFlMTlmOTQ2ZDRmZWUxNTc5M2ZmMzk5MGU1YWVhN2EwM2Q0OBE7vKQ=: 00:16:11.122 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:16:11.122 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:16:11.122 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:16:11.122 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:16:11.122 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:16:11.122 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:16:11.122 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:16:11.122 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.122 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.381 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:16:11.381 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:11.381 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:16:11.381 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:11.381 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:11.381 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:11.381 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:11.381 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:11.381 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:11.381 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:11.948 request: 00:16:11.948 { 00:16:11.948 "name": "nvme0", 00:16:11.948 "trtype": "tcp", 00:16:11.948 "traddr": "10.0.0.3", 00:16:11.948 "adrfam": "ipv4", 00:16:11.948 "trsvcid": "4420", 00:16:11.948 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:11.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a", 00:16:11.948 "prchk_reftag": false, 00:16:11.948 "prchk_guard": false, 00:16:11.948 "hdgst": false, 00:16:11.948 "ddgst": false, 00:16:11.948 "dhchap_key": "key1", 00:16:11.948 "allow_unrecognized_csi": false, 00:16:11.948 "method": "bdev_nvme_attach_controller", 00:16:11.948 "req_id": 1 00:16:11.948 } 00:16:11.948 Got JSON-RPC error response 00:16:11.948 response: 00:16:11.948 { 00:16:11.948 "code": -5, 00:16:11.948 "message": "Input/output error" 00:16:11.948 } 00:16:11.948 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:11.948 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:11.948 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:11.948 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:11.948 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:11.948 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:11.948 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:12.885 nvme0n1 00:16:13.144 
09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:16:13.144 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.144 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:16:13.403 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.403 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.403 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.662 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:16:13.662 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.662 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.662 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.662 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:16:13.662 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:13.662 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:13.921 nvme0n1 00:16:13.921 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:16:13.921 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:16:13.921 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.517 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.517 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.517 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.775 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:14.775 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.775 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.775 09:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.775 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: '' 2s 00:16:14.775 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:14.775 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:14.775 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: 00:16:14.775 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:16:14.775 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:14.775 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:14.775 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: ]] 00:16:14.775 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZmE1YTdjOWIwMmI5ZTg1ZmIwYjIwN2RiOTQ1YjlkODRsapNS: 00:16:14.775 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:16:14.775 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:14.775 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:16.680 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:16:16.680 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:16:16.680 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:16.680 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:16.941 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:16.941 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:16.941 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:16:16.941 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key1 --dhchap-ctrlr-key key2 00:16:16.941 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.941 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.941 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.941 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: 2s 00:16:16.941 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:16.941 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:16.941 09:18:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:16:16.941 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: 00:16:16.941 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:16.941 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:16.941 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:16:16.941 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: ]] 00:16:16.941 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZmQ5Njc5ZGY0MTQ1N2EyMTI0NGMyOTRjN2Q1Mzk4YzFmMDczM2U5OWU4ZWEzYzdii5R8rA==: 00:16:16.941 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:16.941 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:18.844 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:16:18.844 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:16:18.844 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:18.844 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:18.844 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:18.844 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:18.844 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:16:18.844 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.844 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:18.844 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.844 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.844 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.844 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:18.844 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:18.844 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:20.219 nvme0n1 00:16:20.219 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:20.219 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.219 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.219 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.219 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:20.219 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:20.477 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:16:20.477 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:16:20.477 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.045 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.045 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:16:21.045 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.045 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.045 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.045 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:16:21.045 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:16:21.304 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:16:21.304 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:16:21.304 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.562 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.562 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:21.562 09:18:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.562 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.562 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.562 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:21.562 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:21.562 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:21.562 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:16:21.562 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.562 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:16:21.562 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.562 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:21.562 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:22.129 request: 00:16:22.129 { 00:16:22.129 "name": "nvme0", 00:16:22.129 "dhchap_key": "key1", 00:16:22.129 "dhchap_ctrlr_key": "key3", 00:16:22.129 "method": "bdev_nvme_set_keys", 00:16:22.129 "req_id": 1 00:16:22.129 } 00:16:22.129 Got JSON-RPC error response 00:16:22.129 response: 00:16:22.129 { 00:16:22.129 "code": -13, 00:16:22.129 "message": "Permission denied" 00:16:22.129 } 00:16:22.129 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:22.129 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:22.129 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:22.129 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:22.129 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:16:22.129 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:22.129 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.388 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:16:22.388 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:16:23.324 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:23.324 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:16:23.324 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.581 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:16:23.582 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:23.582 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.582 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.582 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.582 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:23.582 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:23.582 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:24.518 nvme0n1 00:16:24.518 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:24.518 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.518 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.518 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.518 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:24.518 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:24.518 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:24.518 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:16:24.518 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:24.518 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:16:24.518 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:24.518 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:24.518 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:25.518 request: 00:16:25.518 { 00:16:25.518 "name": "nvme0", 00:16:25.518 "dhchap_key": "key2", 00:16:25.518 "dhchap_ctrlr_key": "key0", 00:16:25.518 "method": "bdev_nvme_set_keys", 00:16:25.518 "req_id": 1 00:16:25.518 } 00:16:25.518 Got JSON-RPC error response 00:16:25.518 response: 00:16:25.518 { 00:16:25.518 "code": -13, 00:16:25.518 "message": "Permission denied" 00:16:25.518 } 00:16:25.518 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:25.518 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:25.518 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:25.518 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:25.518 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:25.518 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:25.518 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.518 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:16:25.518 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:16:26.514 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:26.514 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.514 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:27.083 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:16:27.083 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:16:27.083 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:16:27.083 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 71692 00:16:27.083 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 71692 ']' 00:16:27.083 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 71692 00:16:27.083 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:27.083 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.083 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71692 00:16:27.083 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:27.083 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:27.083 killing process with pid 71692 00:16:27.083 09:18:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71692' 00:16:27.083 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 71692 00:16:27.083 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 71692 00:16:28.987 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:28.987 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:28.987 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:16:29.247 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:29.247 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:16:29.247 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:29.247 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:29.247 rmmod nvme_tcp 00:16:29.247 rmmod nvme_fabrics 00:16:29.247 rmmod nvme_keyring 00:16:29.247 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:29.247 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:16:29.247 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:16:29.247 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 74710 ']' 00:16:29.247 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 74710 00:16:29.247 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 74710 ']' 00:16:29.247 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 74710 00:16:29.247 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:29.247 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:29.247 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74710 00:16:29.247 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:29.247 killing process with pid 74710 00:16:29.247 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:29.247 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74710' 00:16:29.247 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 74710 00:16:29.247 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 74710 00:16:30.185 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:30.185 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:30.185 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:30.185 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:16:30.185 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:16:30.185 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:16:30.185 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:16:30.185 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:30.185 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:30.185 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:30.185 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:30.185 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:30.185 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:30.185 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:30.185 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:30.185 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:30.185 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:30.185 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:30.185 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:30.185 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:30.185 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:30.444 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:30.444 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:30.444 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.444 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.444 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.444 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:16:30.444 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.GZL /tmp/spdk.key-sha256.vcH /tmp/spdk.key-sha384.fQ5 /tmp/spdk.key-sha512.MEw /tmp/spdk.key-sha512.bhU /tmp/spdk.key-sha384.Kq3 /tmp/spdk.key-sha256.s5z '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:16:30.444 00:16:30.444 real 3m15.470s 00:16:30.444 user 7m45.045s 00:16:30.444 sys 0m28.679s 00:16:30.444 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:30.444 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.444 ************************************ 00:16:30.444 END TEST nvmf_auth_target 
00:16:30.444 ************************************ 00:16:30.444 09:18:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:16:30.445 09:18:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:30.445 09:18:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:30.445 09:18:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:30.445 09:18:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:30.445 ************************************ 00:16:30.445 START TEST nvmf_bdevio_no_huge 00:16:30.445 ************************************ 00:16:30.445 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:30.445 * Looking for test storage... 00:16:30.445 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:30.445 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:30.445 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:16:30.445 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:30.704 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:30.704 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:30.704 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:30.704 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:30.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.705 --rc genhtml_branch_coverage=1 00:16:30.705 --rc genhtml_function_coverage=1 00:16:30.705 --rc genhtml_legend=1 00:16:30.705 --rc geninfo_all_blocks=1 00:16:30.705 --rc geninfo_unexecuted_blocks=1 00:16:30.705 00:16:30.705 ' 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:30.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.705 --rc genhtml_branch_coverage=1 00:16:30.705 --rc genhtml_function_coverage=1 00:16:30.705 --rc genhtml_legend=1 00:16:30.705 --rc geninfo_all_blocks=1 00:16:30.705 --rc geninfo_unexecuted_blocks=1 00:16:30.705 00:16:30.705 ' 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:30.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.705 --rc genhtml_branch_coverage=1 00:16:30.705 --rc genhtml_function_coverage=1 00:16:30.705 --rc genhtml_legend=1 00:16:30.705 --rc geninfo_all_blocks=1 00:16:30.705 --rc geninfo_unexecuted_blocks=1 00:16:30.705 00:16:30.705 ' 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:30.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.705 --rc genhtml_branch_coverage=1 00:16:30.705 --rc genhtml_function_coverage=1 00:16:30.705 --rc genhtml_legend=1 00:16:30.705 --rc geninfo_all_blocks=1 00:16:30.705 --rc geninfo_unexecuted_blocks=1 00:16:30.705 00:16:30.705 ' 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:30.705 
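The cmp_versions trace above is scripts/common.sh deciding how to set the coverage options for the installed lcov (1.15 here) by comparing dotted version strings component by component. A minimal sketch of that style of check; the name version_lt and its body are illustrative only, not the actual helpers from scripts/common.sh:

    # Sketch: "is version A older than version B", split on .-: like the trace above.
    version_lt() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}   # missing components compare as 0
            (( x > y )) && return 1
            (( x < y )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    # version_lt 1.15 2 && echo "lcov is older than 2"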
09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.705 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:30.706 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:30.706 
09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:30.706 Cannot find device "nvmf_init_br" 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:30.706 Cannot find device "nvmf_init_br2" 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:30.706 Cannot find device "nvmf_tgt_br" 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:30.706 Cannot find device "nvmf_tgt_br2" 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:30.706 Cannot find device "nvmf_init_br" 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:30.706 Cannot find device "nvmf_init_br2" 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:30.706 Cannot find device "nvmf_tgt_br" 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:30.706 Cannot find device "nvmf_tgt_br2" 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:30.706 Cannot find device "nvmf_br" 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:30.706 Cannot find device "nvmf_init_if" 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:30.706 Cannot find device "nvmf_init_if2" 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:16:30.706 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:30.706 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:30.706 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:30.966 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:30.966 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:30.966 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:30.966 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:30.966 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:30.966 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:30.966 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:30.966 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:30.966 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:30.966 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:30.966 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:30.966 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:30.966 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:30.966 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:30.967 09:18:24 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:30.967 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:30.967 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:16:30.967 00:16:30.967 --- 10.0.0.3 ping statistics --- 00:16:30.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.967 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:30.967 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:30.967 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:16:30.967 00:16:30.967 --- 10.0.0.4 ping statistics --- 00:16:30.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.967 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:30.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:30.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:16:30.967 00:16:30.967 --- 10.0.0.1 ping statistics --- 00:16:30.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.967 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:30.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:30.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:16:30.967 00:16:30.967 --- 10.0.0.2 ping statistics --- 00:16:30.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.967 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=75406 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 75406 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 75406 ']' 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:30.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:30.967 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:31.226 [2024-12-13 09:18:24.937637] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:16:31.226 [2024-12-13 09:18:24.937816] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:31.485 [2024-12-13 09:18:25.148214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:31.485 [2024-12-13 09:18:25.326587] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.485 [2024-12-13 09:18:25.326657] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.485 [2024-12-13 09:18:25.326679] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:31.485 [2024-12-13 09:18:25.326695] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:31.485 [2024-12-13 09:18:25.326709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:31.485 [2024-12-13 09:18:25.328649] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:16:31.485 [2024-12-13 09:18:25.328803] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:16:31.485 [2024-12-13 09:18:25.328867] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:16:31.485 [2024-12-13 09:18:25.328956] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:31.744 [2024-12-13 09:18:25.491715] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:32.314 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:32.314 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:16:32.314 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:32.314 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:32.314 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:32.314 [2024-12-13 09:18:26.008403] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:32.314 Malloc0 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.314 09:18:26 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:32.314 [2024-12-13 09:18:26.103042] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:32.314 { 00:16:32.314 "params": { 00:16:32.314 "name": "Nvme$subsystem", 00:16:32.314 "trtype": "$TEST_TRANSPORT", 00:16:32.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.314 "adrfam": "ipv4", 00:16:32.314 "trsvcid": "$NVMF_PORT", 00:16:32.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.314 "hdgst": ${hdgst:-false}, 00:16:32.314 "ddgst": ${ddgst:-false} 00:16:32.314 }, 00:16:32.314 "method": "bdev_nvme_attach_controller" 00:16:32.314 } 00:16:32.314 EOF 00:16:32.314 )") 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:16:32.314 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:16:32.314 "params": { 00:16:32.314 "name": "Nvme1", 00:16:32.314 "trtype": "tcp", 00:16:32.314 "traddr": "10.0.0.3", 00:16:32.314 "adrfam": "ipv4", 00:16:32.314 "trsvcid": "4420", 00:16:32.314 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:32.314 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:32.314 "hdgst": false, 00:16:32.314 "ddgst": false 00:16:32.314 }, 00:16:32.314 "method": "bdev_nvme_attach_controller" 00:16:32.314 }' 00:16:32.574 [2024-12-13 09:18:26.216769] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:16:32.574 [2024-12-13 09:18:26.216930] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid75442 ] 00:16:32.574 [2024-12-13 09:18:26.431881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:32.833 [2024-12-13 09:18:26.596123] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.833 [2024-12-13 09:18:26.596213] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.833 [2024-12-13 09:18:26.596300] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.093 [2024-12-13 09:18:26.759618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:33.353 I/O targets: 00:16:33.353 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:33.353 00:16:33.353 00:16:33.353 CUnit - A unit testing framework for C - Version 2.1-3 00:16:33.353 http://cunit.sourceforge.net/ 00:16:33.353 00:16:33.353 00:16:33.353 Suite: bdevio tests on: Nvme1n1 00:16:33.353 Test: blockdev write read block ...passed 00:16:33.353 Test: blockdev write zeroes read block ...passed 00:16:33.353 Test: blockdev write zeroes read no split ...passed 00:16:33.353 Test: blockdev write zeroes read split ...passed 00:16:33.353 Test: blockdev write zeroes read split partial ...passed 00:16:33.353 Test: blockdev reset ...[2024-12-13 09:18:27.109780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:16:33.353 [2024-12-13 09:18:27.109961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000029c00 (9): Bad file descriptor 00:16:33.353 [2024-12-13 09:18:27.130388] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:16:33.353 passed 00:16:33.353 Test: blockdev write read 8 blocks ...passed 00:16:33.353 Test: blockdev write read size > 128k ...passed 00:16:33.353 Test: blockdev write read invalid size ...passed 00:16:33.353 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:33.353 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:33.353 Test: blockdev write read max offset ...passed 00:16:33.353 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:33.353 Test: blockdev writev readv 8 blocks ...passed 00:16:33.353 Test: blockdev writev readv 30 x 1block ...passed 00:16:33.354 Test: blockdev writev readv block ...passed 00:16:33.354 Test: blockdev writev readv size > 128k ...passed 00:16:33.354 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:33.354 Test: blockdev comparev and writev ...[2024-12-13 09:18:27.143646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:33.354 [2024-12-13 09:18:27.143832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.354 [2024-12-13 09:18:27.143874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:33.354 [2024-12-13 09:18:27.143896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:33.354 [2024-12-13 09:18:27.144330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:33.354 [2024-12-13 09:18:27.144363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:33.354 [2024-12-13 09:18:27.144389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:33.354 [2024-12-13 09:18:27.144408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:33.354 [2024-12-13 09:18:27.144964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:33.354 [2024-12-13 09:18:27.145011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:33.354 [2024-12-13 09:18:27.145039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:33.354 [2024-12-13 09:18:27.145063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:33.354 [2024-12-13 09:18:27.145581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:33.354 [2024-12-13 09:18:27.145627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:33.354 [2024-12-13 09:18:27.145654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:33.354 [2024-12-13 09:18:27.145674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:33.354 passed 00:16:33.354 Test: blockdev nvme passthru rw ...passed 00:16:33.354 Test: blockdev nvme passthru vendor specific ...[2024-12-13 09:18:27.146823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:33.354 [2024-12-13 09:18:27.146874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:33.354 [2024-12-13 09:18:27.147030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:33.354 [2024-12-13 09:18:27.147066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:33.354 [2024-12-13 09:18:27.147214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:33.354 [2024-12-13 09:18:27.147253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:33.354 [2024-12-13 09:18:27.147421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:33.354 [2024-12-13 09:18:27.147460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:33.354 passed 00:16:33.354 Test: blockdev nvme admin passthru ...passed 00:16:33.354 Test: blockdev copy ...passed 00:16:33.354 00:16:33.354 Run Summary: Type Total Ran Passed Failed Inactive 00:16:33.354 suites 1 1 n/a 0 0 00:16:33.354 tests 23 23 23 0 0 00:16:33.354 asserts 152 152 152 0 n/a 00:16:33.354 00:16:33.354 Elapsed time = 0.260 seconds 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:34.292 rmmod nvme_tcp 00:16:34.292 rmmod nvme_fabrics 00:16:34.292 rmmod nvme_keyring 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 75406 ']' 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 75406 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 75406 ']' 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 75406 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75406 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:16:34.292 killing process with pid 75406 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75406' 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 75406 00:16:34.292 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 75406 00:16:35.230 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:35.230 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:35.230 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:35.230 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:16:35.230 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:16:35.230 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:35.230 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:16:35.230 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:35.230 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:35.230 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:35.230 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:35.230 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:35.230 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:35.230 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:35.230 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:35.230 09:18:28 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:35.230 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:35.230 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:35.230 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:35.230 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:35.230 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:35.230 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:35.230 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:35.230 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.230 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.230 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.230 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:16:35.230 00:16:35.230 real 0m4.797s 00:16:35.230 user 0m16.455s 00:16:35.230 sys 0m1.557s 00:16:35.230 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:35.230 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:35.230 ************************************ 00:16:35.230 END TEST nvmf_bdevio_no_huge 00:16:35.230 ************************************ 00:16:35.230 09:18:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:35.230 09:18:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:35.230 09:18:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:35.230 09:18:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:35.230 ************************************ 00:16:35.230 START TEST nvmf_tls 00:16:35.230 ************************************ 00:16:35.230 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:35.491 * Looking for test storage... 
00:16:35.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:35.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.491 --rc genhtml_branch_coverage=1 00:16:35.491 --rc genhtml_function_coverage=1 00:16:35.491 --rc genhtml_legend=1 00:16:35.491 --rc geninfo_all_blocks=1 00:16:35.491 --rc geninfo_unexecuted_blocks=1 00:16:35.491 00:16:35.491 ' 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:35.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.491 --rc genhtml_branch_coverage=1 00:16:35.491 --rc genhtml_function_coverage=1 00:16:35.491 --rc genhtml_legend=1 00:16:35.491 --rc geninfo_all_blocks=1 00:16:35.491 --rc geninfo_unexecuted_blocks=1 00:16:35.491 00:16:35.491 ' 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:35.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.491 --rc genhtml_branch_coverage=1 00:16:35.491 --rc genhtml_function_coverage=1 00:16:35.491 --rc genhtml_legend=1 00:16:35.491 --rc geninfo_all_blocks=1 00:16:35.491 --rc geninfo_unexecuted_blocks=1 00:16:35.491 00:16:35.491 ' 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:35.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.491 --rc genhtml_branch_coverage=1 00:16:35.491 --rc genhtml_function_coverage=1 00:16:35.491 --rc genhtml_legend=1 00:16:35.491 --rc geninfo_all_blocks=1 00:16:35.491 --rc geninfo_unexecuted_blocks=1 00:16:35.491 00:16:35.491 ' 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.491 09:18:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.491 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:35.492 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:35.492 
09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:35.492 Cannot find device "nvmf_init_br" 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:35.492 Cannot find device "nvmf_init_br2" 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:35.492 Cannot find device "nvmf_tgt_br" 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:35.492 Cannot find device "nvmf_tgt_br2" 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:35.492 Cannot find device "nvmf_init_br" 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:35.492 Cannot find device "nvmf_init_br2" 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:35.492 Cannot find device "nvmf_tgt_br" 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:35.492 Cannot find device "nvmf_tgt_br2" 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:35.492 Cannot find device "nvmf_br" 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:16:35.492 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:35.752 Cannot find device "nvmf_init_if" 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:35.752 Cannot find device "nvmf_init_if2" 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:35.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:35.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:35.752 09:18:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:35.752 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:35.752 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:16:35.752 00:16:35.752 --- 10.0.0.3 ping statistics --- 00:16:35.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.752 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:35.752 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:35.752 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:16:35.752 00:16:35.752 --- 10.0.0.4 ping statistics --- 00:16:35.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.752 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:35.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:35.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:35.752 00:16:35.752 --- 10.0.0.1 ping statistics --- 00:16:35.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.752 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:35.752 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:35.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:35.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:16:35.752 00:16:35.752 --- 10.0.0.2 ping statistics --- 00:16:35.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.753 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:35.753 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.753 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:16:35.753 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:35.753 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.753 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:35.753 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:35.753 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.753 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:35.753 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:36.012 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:36.012 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:36.012 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:36.012 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:36.012 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75714 00:16:36.012 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:36.012 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75714 00:16:36.012 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75714 ']' 00:16:36.012 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.012 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.012 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.012 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.012 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:36.012 [2024-12-13 09:18:29.779415] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
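With the target app now starting inside its namespace, the nvmf_veth_init output above boils down to roughly the topology below. This is a condensed sketch assembled from the ip/iptables commands visible in this log; the second initiator/target pair (nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4) is created the same way and omitted here, and each interface is also brought up with 'ip link set <if> up' as shown above:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3   # initiator side reaching the in-namespace target

End result: 10.0.0.1 (initiator, host side) and 10.0.0.3 (target, inside nvmf_tgt_ns_spdk) sit on the same bridge, with TCP port 4420 opened for the NVMe/TCP transport, which is exactly what the ping checks above verify.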
00:16:36.012 [2024-12-13 09:18:29.780257] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.272 [2024-12-13 09:18:29.969574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.272 [2024-12-13 09:18:30.096046] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:36.272 [2024-12-13 09:18:30.096123] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:36.272 [2024-12-13 09:18:30.096146] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:36.272 [2024-12-13 09:18:30.096175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:36.272 [2024-12-13 09:18:30.096192] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:36.272 [2024-12-13 09:18:30.097639] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.224 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:37.224 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:37.224 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:37.224 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:37.224 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:37.224 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.224 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:16:37.224 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:37.224 true 00:16:37.224 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:37.224 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:16:37.792 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:16:37.792 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:16:37.792 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:37.792 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:37.792 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:16:38.052 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:16:38.052 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:16:38.052 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:38.311 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:16:38.311 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:16:38.570 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:16:38.570 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:16:38.570 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:16:38.570 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:38.829 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:16:38.829 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:16:38.829 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:39.089 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:39.089 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:39.348 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:16:39.348 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:16:39.348 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:39.607 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:39.607 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:39.867 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:16:39.867 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:16:39.867 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:39.867 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:39.867 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:39.867 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:16:39.867 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:16:39.867 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:16:39.867 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:39.867 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:39.867 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:39.867 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:16:39.867 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:39.867 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:16:39.867 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:16:39.867 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:16:39.867 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:40.126 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:40.126 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:16:40.126 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.TL7MsODgsU 00:16:40.126 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:16:40.126 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.kknbKTOL7D 00:16:40.126 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:40.126 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:40.126 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.TL7MsODgsU 00:16:40.126 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.kknbKTOL7D 00:16:40.126 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:40.385 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:40.644 [2024-12-13 09:18:34.459141] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:40.904 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.TL7MsODgsU 00:16:40.904 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TL7MsODgsU 00:16:40.904 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:41.163 [2024-12-13 09:18:34.801182] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.163 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:41.163 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:41.431 [2024-12-13 09:18:35.269371] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:41.431 [2024-12-13 09:18:35.269776] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:41.431 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:41.693 malloc0 00:16:41.693 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:41.951 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TL7MsODgsU 00:16:42.210 09:18:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:42.470 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.TL7MsODgsU 00:16:54.851 Initializing NVMe Controllers 00:16:54.851 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:54.851 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:54.851 Initialization complete. Launching workers. 00:16:54.851 ======================================================== 00:16:54.851 Latency(us) 00:16:54.851 Device Information : IOPS MiB/s Average min max 00:16:54.851 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6701.10 26.18 9553.95 2567.32 26810.30 00:16:54.851 ======================================================== 00:16:54.851 Total : 6701.10 26.18 9553.95 2567.32 26810.30 00:16:54.851 00:16:54.851 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TL7MsODgsU 00:16:54.851 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:54.851 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:54.851 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:54.851 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TL7MsODgsU 00:16:54.851 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:54.851 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75963 00:16:54.851 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:54.851 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:54.851 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75963 /var/tmp/bdevperf.sock 00:16:54.851 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75963 ']' 00:16:54.851 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:54.851 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:54.851 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
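Condensed, the TLS key provisioning and target bring-up captured above is the following RPC sequence (rpc.py is /home/vagrant/spdk_repo/spdk/scripts/rpc.py; /tmp/tmp.TL7MsODgsU is the mktemp file from this run holding the NVMeTLSkey-1:01:... interchange PSK produced by format_interchange_psk, and would differ on another run):

  rpc.py sock_impl_set_options -i ssl --tls-version 13
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.TL7MsODgsU
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k flag enables TLS on the 10.0.0.3:4420 listener, and --psk key0 ties the keyring entry to host nqn.2016-06.io.spdk:host1, which is why the spdk_nvme_perf run above can connect with --psk-path pointing at the same key file.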
00:16:54.851 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.851 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:54.851 [2024-12-13 09:18:46.757729] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:16:54.851 [2024-12-13 09:18:46.757891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75963 ] 00:16:54.851 [2024-12-13 09:18:46.945328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.851 [2024-12-13 09:18:47.071611] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.851 [2024-12-13 09:18:47.289722] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:54.851 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:54.851 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:54.851 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TL7MsODgsU 00:16:54.851 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:54.851 [2024-12-13 09:18:48.272012] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:54.851 TLSTESTn1 00:16:54.851 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:54.851 Running I/O for 10 seconds... 
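The initiator side of this run is driven the same way: bdevperf was launched with -z -q 128 -o 4096 -w verify -t 10, so it waits on its own RPC socket until the PSK is registered there and a controller is attached over TLS, after which the 10-second verify workload whose per-second IOPS follow is kicked off. A condensed view of the commands already shown in this log:

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TL7MsODgsU
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

TLSTESTn1 is the namespace bdev created by the attach, and it is the device the verify workload below exercises.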
00:16:56.724 2816.00 IOPS, 11.00 MiB/s [2024-12-13T09:18:51.551Z] 2878.50 IOPS, 11.24 MiB/s [2024-12-13T09:18:52.488Z] 2901.33 IOPS, 11.33 MiB/s [2024-12-13T09:18:53.866Z] 2898.00 IOPS, 11.32 MiB/s [2024-12-13T09:18:54.804Z] 2918.40 IOPS, 11.40 MiB/s [2024-12-13T09:18:55.740Z] 2922.67 IOPS, 11.42 MiB/s [2024-12-13T09:18:56.678Z] 2939.29 IOPS, 11.48 MiB/s [2024-12-13T09:18:57.615Z] 2944.00 IOPS, 11.50 MiB/s [2024-12-13T09:18:58.603Z] 2948.33 IOPS, 11.52 MiB/s [2024-12-13T09:18:58.603Z] 2960.20 IOPS, 11.56 MiB/s 00:17:04.713 Latency(us) 00:17:04.713 [2024-12-13T09:18:58.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.713 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:04.713 Verification LBA range: start 0x0 length 0x2000 00:17:04.713 TLSTESTn1 : 10.03 2964.38 11.58 0.00 0.00 43071.66 5362.04 28835.84 00:17:04.713 [2024-12-13T09:18:58.603Z] =================================================================================================================== 00:17:04.713 [2024-12-13T09:18:58.603Z] Total : 2964.38 11.58 0.00 0.00 43071.66 5362.04 28835.84 00:17:04.713 { 00:17:04.713 "results": [ 00:17:04.713 { 00:17:04.713 "job": "TLSTESTn1", 00:17:04.713 "core_mask": "0x4", 00:17:04.713 "workload": "verify", 00:17:04.713 "status": "finished", 00:17:04.713 "verify_range": { 00:17:04.713 "start": 0, 00:17:04.713 "length": 8192 00:17:04.713 }, 00:17:04.713 "queue_depth": 128, 00:17:04.713 "io_size": 4096, 00:17:04.713 "runtime": 10.028077, 00:17:04.713 "iops": 2964.376918924735, 00:17:04.713 "mibps": 11.579597339549746, 00:17:04.713 "io_failed": 0, 00:17:04.713 "io_timeout": 0, 00:17:04.713 "avg_latency_us": 43071.66094172118, 00:17:04.713 "min_latency_us": 5362.036363636364, 00:17:04.713 "max_latency_us": 28835.84 00:17:04.713 } 00:17:04.713 ], 00:17:04.713 "core_count": 1 00:17:04.714 } 00:17:04.714 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:04.714 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 75963 00:17:04.714 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75963 ']' 00:17:04.714 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75963 00:17:04.714 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:04.714 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:04.714 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75963 00:17:04.714 killing process with pid 75963 00:17:04.714 Received shutdown signal, test time was about 10.000000 seconds 00:17:04.714 00:17:04.714 Latency(us) 00:17:04.714 [2024-12-13T09:18:58.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.714 [2024-12-13T09:18:58.604Z] =================================================================================================================== 00:17:04.714 [2024-12-13T09:18:58.604Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:04.714 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:04.714 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:04.714 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 75963' 00:17:04.714 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75963 00:17:04.714 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75963 00:17:06.116 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kknbKTOL7D 00:17:06.116 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:06.116 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kknbKTOL7D 00:17:06.116 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:06.116 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.116 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:06.116 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.116 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kknbKTOL7D 00:17:06.116 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:06.116 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:06.116 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:06.116 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.kknbKTOL7D 00:17:06.116 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:06.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:06.116 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76114 00:17:06.116 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:06.116 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:06.116 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76114 /var/tmp/bdevperf.sock 00:17:06.116 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76114 ']' 00:17:06.116 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:06.116 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.116 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:06.116 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.116 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:06.116 [2024-12-13 09:18:59.686793] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:17:06.117 [2024-12-13 09:18:59.686962] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76114 ] 00:17:06.117 [2024-12-13 09:18:59.873683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.117 [2024-12-13 09:19:00.004377] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.376 [2024-12-13 09:19:00.212429] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:06.944 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.944 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:06.944 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kknbKTOL7D 00:17:07.202 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:07.461 [2024-12-13 09:19:01.249855] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:07.461 [2024-12-13 09:19:01.258890] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:07.461 [2024-12-13 09:19:01.258905] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:07.461 [2024-12-13 09:19:01.259850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:17:07.461 [2024-12-13 09:19:01.260847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:07.461 [2024-12-13 09:19:01.260900] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:17:07.461 [2024-12-13 09:19:01.260936] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:07.461 [2024-12-13 09:19:01.260951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:17:07.461 request: 00:17:07.461 { 00:17:07.461 "name": "TLSTEST", 00:17:07.461 "trtype": "tcp", 00:17:07.461 "traddr": "10.0.0.3", 00:17:07.461 "adrfam": "ipv4", 00:17:07.461 "trsvcid": "4420", 00:17:07.461 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.461 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:07.461 "prchk_reftag": false, 00:17:07.461 "prchk_guard": false, 00:17:07.461 "hdgst": false, 00:17:07.461 "ddgst": false, 00:17:07.461 "psk": "key0", 00:17:07.461 "allow_unrecognized_csi": false, 00:17:07.461 "method": "bdev_nvme_attach_controller", 00:17:07.461 "req_id": 1 00:17:07.461 } 00:17:07.461 Got JSON-RPC error response 00:17:07.461 response: 00:17:07.461 { 00:17:07.461 "code": -5, 00:17:07.461 "message": "Input/output error" 00:17:07.461 } 00:17:07.461 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 76114 00:17:07.461 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76114 ']' 00:17:07.461 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76114 00:17:07.462 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:07.462 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:07.462 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76114 00:17:07.462 killing process with pid 76114 00:17:07.462 Received shutdown signal, test time was about 10.000000 seconds 00:17:07.462 00:17:07.462 Latency(us) 00:17:07.462 [2024-12-13T09:19:01.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.462 [2024-12-13T09:19:01.352Z] =================================================================================================================== 00:17:07.462 [2024-12-13T09:19:01.352Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:07.462 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:07.462 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:07.462 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76114' 00:17:07.462 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76114 00:17:07.462 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76114 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TL7MsODgsU 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TL7MsODgsU 
00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TL7MsODgsU 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TL7MsODgsU 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76149 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76149 /var/tmp/bdevperf.sock 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76149 ']' 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:08.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:08.399 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:08.399 [2024-12-13 09:19:02.285528] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:17:08.399 [2024-12-13 09:19:02.285717] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76149 ] 00:17:08.658 [2024-12-13 09:19:02.470112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.917 [2024-12-13 09:19:02.578061] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:08.917 [2024-12-13 09:19:02.757319] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:09.483 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:09.483 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:09.483 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TL7MsODgsU 00:17:09.743 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:17:10.002 [2024-12-13 09:19:03.792906] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:10.002 [2024-12-13 09:19:03.805207] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:10.002 [2024-12-13 09:19:03.805258] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:10.002 [2024-12-13 09:19:03.805396] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:10.002 [2024-12-13 09:19:03.805755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:17:10.002 [2024-12-13 09:19:03.806712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:17:10.002 [2024-12-13 09:19:03.807709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:10.002 [2024-12-13 09:19:03.807748] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:17:10.002 [2024-12-13 09:19:03.807784] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:10.002 [2024-12-13 09:19:03.807799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:17:10.002 request: 00:17:10.002 { 00:17:10.002 "name": "TLSTEST", 00:17:10.002 "trtype": "tcp", 00:17:10.002 "traddr": "10.0.0.3", 00:17:10.002 "adrfam": "ipv4", 00:17:10.002 "trsvcid": "4420", 00:17:10.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.002 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:10.002 "prchk_reftag": false, 00:17:10.002 "prchk_guard": false, 00:17:10.002 "hdgst": false, 00:17:10.002 "ddgst": false, 00:17:10.002 "psk": "key0", 00:17:10.002 "allow_unrecognized_csi": false, 00:17:10.002 "method": "bdev_nvme_attach_controller", 00:17:10.002 "req_id": 1 00:17:10.002 } 00:17:10.002 Got JSON-RPC error response 00:17:10.002 response: 00:17:10.002 { 00:17:10.002 "code": -5, 00:17:10.002 "message": "Input/output error" 00:17:10.002 } 00:17:10.002 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 76149 00:17:10.002 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76149 ']' 00:17:10.002 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76149 00:17:10.002 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:10.002 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:10.002 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76149 00:17:10.002 killing process with pid 76149 00:17:10.002 Received shutdown signal, test time was about 10.000000 seconds 00:17:10.002 00:17:10.002 Latency(us) 00:17:10.002 [2024-12-13T09:19:03.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.002 [2024-12-13T09:19:03.892Z] =================================================================================================================== 00:17:10.002 [2024-12-13T09:19:03.892Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:10.002 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:10.002 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:10.002 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76149' 00:17:10.002 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76149 00:17:10.002 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76149 00:17:10.938 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:10.938 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:10.938 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:10.938 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:10.938 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:10.938 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TL7MsODgsU 00:17:10.938 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:10.938 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TL7MsODgsU 
00:17:10.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:10.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:10.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:10.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:10.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TL7MsODgsU 00:17:10.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:10.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:10.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:10.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TL7MsODgsU 00:17:10.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:10.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76190 00:17:10.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:10.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:10.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76190 /var/tmp/bdevperf.sock 00:17:10.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76190 ']' 00:17:10.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:10.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:10.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:10.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:10.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:10.939 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:11.198 [2024-12-13 09:19:04.924461] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:17:11.198 [2024-12-13 09:19:04.924640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76190 ] 00:17:11.456 [2024-12-13 09:19:05.106181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.456 [2024-12-13 09:19:05.204830] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.714 [2024-12-13 09:19:05.375062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:11.972 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.972 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:11.972 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TL7MsODgsU 00:17:12.231 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:12.490 [2024-12-13 09:19:06.315133] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:12.490 [2024-12-13 09:19:06.326009] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:12.490 [2024-12-13 09:19:06.326057] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:12.490 [2024-12-13 09:19:06.326135] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:12.490 [2024-12-13 09:19:06.326789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:17:12.490 [2024-12-13 09:19:06.327749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:17:12.490 [2024-12-13 09:19:06.328746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:17:12.490 [2024-12-13 09:19:06.328787] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:17:12.490 [2024-12-13 09:19:06.328808] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:17:12.490 [2024-12-13 09:19:06.328825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:17:12.490 request: 00:17:12.490 { 00:17:12.490 "name": "TLSTEST", 00:17:12.490 "trtype": "tcp", 00:17:12.490 "traddr": "10.0.0.3", 00:17:12.490 "adrfam": "ipv4", 00:17:12.490 "trsvcid": "4420", 00:17:12.490 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:12.490 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:12.490 "prchk_reftag": false, 00:17:12.490 "prchk_guard": false, 00:17:12.490 "hdgst": false, 00:17:12.490 "ddgst": false, 00:17:12.490 "psk": "key0", 00:17:12.490 "allow_unrecognized_csi": false, 00:17:12.490 "method": "bdev_nvme_attach_controller", 00:17:12.490 "req_id": 1 00:17:12.490 } 00:17:12.490 Got JSON-RPC error response 00:17:12.490 response: 00:17:12.490 { 00:17:12.490 "code": -5, 00:17:12.490 "message": "Input/output error" 00:17:12.490 } 00:17:12.490 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 76190 00:17:12.490 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76190 ']' 00:17:12.490 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76190 00:17:12.490 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:12.490 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:12.490 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76190 00:17:12.490 killing process with pid 76190 00:17:12.490 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:12.490 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:12.490 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76190' 00:17:12.490 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76190 00:17:12.490 Received shutdown signal, test time was about 10.000000 seconds 00:17:12.490 00:17:12.490 Latency(us) 00:17:12.490 [2024-12-13T09:19:06.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.490 [2024-12-13T09:19:06.380Z] =================================================================================================================== 00:17:12.490 [2024-12-13T09:19:06.380Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:12.490 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76190 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:13.426 09:19:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76225 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76225 /var/tmp/bdevperf.sock 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76225 ']' 00:17:13.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:13.426 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:13.426 [2024-12-13 09:19:07.293715] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:17:13.426 [2024-12-13 09:19:07.293902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76225 ] 00:17:13.685 [2024-12-13 09:19:07.478272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.943 [2024-12-13 09:19:07.579620] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:13.943 [2024-12-13 09:19:07.743586] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:14.511 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:14.511 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:14.511 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:17:14.770 [2024-12-13 09:19:08.472319] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:17:14.770 [2024-12-13 09:19:08.472390] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:14.770 request: 00:17:14.770 { 00:17:14.770 "name": "key0", 00:17:14.770 "path": "", 00:17:14.770 "method": "keyring_file_add_key", 00:17:14.770 "req_id": 1 00:17:14.770 } 00:17:14.770 Got JSON-RPC error response 00:17:14.770 response: 00:17:14.770 { 00:17:14.770 "code": -1, 00:17:14.770 "message": "Operation not permitted" 00:17:14.770 } 00:17:14.770 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:15.029 [2024-12-13 09:19:08.720641] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:15.029 [2024-12-13 09:19:08.720727] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:17:15.029 request: 00:17:15.029 { 00:17:15.029 "name": "TLSTEST", 00:17:15.029 "trtype": "tcp", 00:17:15.029 "traddr": "10.0.0.3", 00:17:15.029 "adrfam": "ipv4", 00:17:15.029 "trsvcid": "4420", 00:17:15.029 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:15.029 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:15.029 "prchk_reftag": false, 00:17:15.029 "prchk_guard": false, 00:17:15.029 "hdgst": false, 00:17:15.029 "ddgst": false, 00:17:15.029 "psk": "key0", 00:17:15.029 "allow_unrecognized_csi": false, 00:17:15.029 "method": "bdev_nvme_attach_controller", 00:17:15.029 "req_id": 1 00:17:15.029 } 00:17:15.029 Got JSON-RPC error response 00:17:15.029 response: 00:17:15.029 { 00:17:15.029 "code": -126, 00:17:15.029 "message": "Required key not available" 00:17:15.029 } 00:17:15.029 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 76225 00:17:15.029 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76225 ']' 00:17:15.029 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76225 00:17:15.029 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:15.029 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:15.029 09:19:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76225 00:17:15.029 killing process with pid 76225 00:17:15.029 Received shutdown signal, test time was about 10.000000 seconds 00:17:15.029 00:17:15.029 Latency(us) 00:17:15.029 [2024-12-13T09:19:08.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.029 [2024-12-13T09:19:08.919Z] =================================================================================================================== 00:17:15.029 [2024-12-13T09:19:08.919Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:15.029 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:15.029 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:15.029 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76225' 00:17:15.029 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76225 00:17:15.029 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76225 00:17:15.967 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:15.967 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:15.967 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:15.967 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:15.967 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:15.967 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 75714 00:17:15.967 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75714 ']' 00:17:15.967 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75714 00:17:15.967 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:15.967 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:15.967 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75714 00:17:15.967 killing process with pid 75714 00:17:15.967 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:15.967 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:15.967 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75714' 00:17:15.967 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75714 00:17:15.967 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75714 00:17:16.904 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:16.904 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:16.904 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:16.904 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:17:16.904 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:16.905 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:17:16.905 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:17.164 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:17.164 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:17:17.164 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.EcOWTTjkkG 00:17:17.164 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:17.164 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.EcOWTTjkkG 00:17:17.164 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:17:17.164 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:17.164 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:17.164 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:17.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.164 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=76288 00:17:17.164 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 76288 00:17:17.164 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76288 ']' 00:17:17.164 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.164 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:17.164 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.164 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.164 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.164 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:17.164 [2024-12-13 09:19:10.953354] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:17.164 [2024-12-13 09:19:10.953535] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.423 [2024-12-13 09:19:11.136106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.423 [2024-12-13 09:19:11.229189] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:17.423 [2024-12-13 09:19:11.229252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
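The NVMeTLSkey-1:02:... value assigned to key_long above comes from format_interchange_psk, which pipes the configured key and digest id through the inline python shown in the trace. A minimal sketch of that conversion, assuming (not confirmed against SPDK source) that the interchange form is base64 of the key bytes with a little-endian CRC32 appended, wrapped in the NVMeTLSkey-1:<digest>: prefix and a trailing colon:

import base64, struct, zlib

def format_interchange_psk(key: str, digest: int) -> str:
    # Sketch only: append a little-endian CRC32 of the key bytes, then base64-encode.
    raw = key.encode()
    blob = raw + struct.pack("<I", zlib.crc32(raw))
    return f"NVMeTLSkey-1:{digest:02d}:{base64.b64encode(blob).decode()}:"

# Reproduces the shape of the key_long value above for digest id 2.
print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))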
00:17:17.423 [2024-12-13 09:19:11.229287] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:17.423 [2024-12-13 09:19:11.229325] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:17.423 [2024-12-13 09:19:11.229339] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:17.423 [2024-12-13 09:19:11.230460] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.683 [2024-12-13 09:19:11.388899] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:18.282 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.282 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:18.282 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:18.282 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:18.282 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:18.282 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.282 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.EcOWTTjkkG 00:17:18.282 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.EcOWTTjkkG 00:17:18.282 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:18.541 [2024-12-13 09:19:12.217047] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.541 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:18.800 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:19.058 [2024-12-13 09:19:12.781276] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:19.058 [2024-12-13 09:19:12.781680] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:19.058 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:19.318 malloc0 00:17:19.318 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:19.576 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.EcOWTTjkkG 00:17:19.836 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:20.096 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EcOWTTjkkG 00:17:20.096 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:17:20.096 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:20.096 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:20.096 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.EcOWTTjkkG 00:17:20.096 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:20.096 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76350 00:17:20.096 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:20.096 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:20.096 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76350 /var/tmp/bdevperf.sock 00:17:20.096 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76350 ']' 00:17:20.096 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:20.096 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:20.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:20.096 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:20.096 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:20.096 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:20.096 [2024-12-13 09:19:13.914269] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:17:20.096 [2024-12-13 09:19:13.914455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76350 ] 00:17:20.355 [2024-12-13 09:19:14.094597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.355 [2024-12-13 09:19:14.223650] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.615 [2024-12-13 09:19:14.405615] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:21.184 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.184 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:21.184 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EcOWTTjkkG 00:17:21.442 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:21.700 [2024-12-13 09:19:15.399515] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:21.700 TLSTESTn1 00:17:21.700 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:21.959 Running I/O for 10 seconds... 00:17:23.836 2807.00 IOPS, 10.96 MiB/s [2024-12-13T09:19:18.664Z] 2765.50 IOPS, 10.80 MiB/s [2024-12-13T09:19:20.046Z] 2773.33 IOPS, 10.83 MiB/s [2024-12-13T09:19:20.984Z] 2781.25 IOPS, 10.86 MiB/s [2024-12-13T09:19:21.921Z] 2778.80 IOPS, 10.85 MiB/s [2024-12-13T09:19:22.859Z] 2785.83 IOPS, 10.88 MiB/s [2024-12-13T09:19:23.799Z] 2803.29 IOPS, 10.95 MiB/s [2024-12-13T09:19:24.736Z] 2814.75 IOPS, 11.00 MiB/s [2024-12-13T09:19:25.682Z] 2824.11 IOPS, 11.03 MiB/s [2024-12-13T09:19:25.682Z] 2831.40 IOPS, 11.06 MiB/s 00:17:31.792 Latency(us) 00:17:31.792 [2024-12-13T09:19:25.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.792 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:31.792 Verification LBA range: start 0x0 length 0x2000 00:17:31.792 TLSTESTn1 : 10.02 2836.94 11.08 0.00 0.00 45027.25 9175.04 48615.80 00:17:31.792 [2024-12-13T09:19:25.682Z] =================================================================================================================== 00:17:31.792 [2024-12-13T09:19:25.682Z] Total : 2836.94 11.08 0.00 0.00 45027.25 9175.04 48615.80 00:17:31.792 { 00:17:31.792 "results": [ 00:17:31.792 { 00:17:31.792 "job": "TLSTESTn1", 00:17:31.792 "core_mask": "0x4", 00:17:31.792 "workload": "verify", 00:17:31.792 "status": "finished", 00:17:31.792 "verify_range": { 00:17:31.792 "start": 0, 00:17:31.792 "length": 8192 00:17:31.792 }, 00:17:31.792 "queue_depth": 128, 00:17:31.792 "io_size": 4096, 00:17:31.792 "runtime": 10.024517, 00:17:31.792 "iops": 2836.944662770286, 00:17:31.792 "mibps": 11.08181508894643, 00:17:31.792 "io_failed": 0, 00:17:31.792 "io_timeout": 0, 00:17:31.792 "avg_latency_us": 45027.247383842296, 00:17:31.792 "min_latency_us": 9175.04, 00:17:31.792 "max_latency_us": 
48615.796363636364 00:17:31.792 } 00:17:31.792 ], 00:17:31.792 "core_count": 1 00:17:31.792 } 00:17:32.051 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:32.051 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 76350 00:17:32.051 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76350 ']' 00:17:32.051 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76350 00:17:32.051 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:32.051 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:32.051 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76350 00:17:32.051 killing process with pid 76350 00:17:32.051 Received shutdown signal, test time was about 10.000000 seconds 00:17:32.051 00:17:32.051 Latency(us) 00:17:32.051 [2024-12-13T09:19:25.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.051 [2024-12-13T09:19:25.941Z] =================================================================================================================== 00:17:32.051 [2024-12-13T09:19:25.941Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:32.051 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:32.051 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:32.051 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76350' 00:17:32.051 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76350 00:17:32.052 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76350 00:17:32.989 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.EcOWTTjkkG 00:17:32.989 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EcOWTTjkkG 00:17:32.989 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:32.989 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EcOWTTjkkG 00:17:32.989 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:32.989 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.989 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:32.989 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.989 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EcOWTTjkkG 00:17:32.989 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:32.989 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:32.989 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 
-- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:32.989 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.EcOWTTjkkG 00:17:32.989 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:32.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:32.989 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76494 00:17:32.989 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:32.989 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:32.989 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76494 /var/tmp/bdevperf.sock 00:17:32.989 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76494 ']' 00:17:32.989 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:32.989 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:32.989 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:32.989 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:32.989 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.989 [2024-12-13 09:19:26.755311] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
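The chmod 0666 on /tmp/tmp.EcOWTTjkkG above sets up this negative test: the file-based keyring only accepts key files that are private to the owner, so the keyring_file_add_key call issued by the bdevperf run that follows is expected to be rejected with an "Invalid permissions" error instead of loading the key. A minimal sketch of that kind of permission gate (an assumption about the check, not SPDK's source):

import os, stat

def check_key_file(path: str) -> None:
    # Sketch: refuse any key file that group or other can read or write.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & 0o077:
        raise PermissionError(f"Invalid permissions for key file '{path}': {mode:o}")

# A freshly created 0600 key passes; the 0666 file chmod'd above would raise.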
00:17:32.989 [2024-12-13 09:19:26.755505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76494 ] 00:17:33.248 [2024-12-13 09:19:26.941420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.248 [2024-12-13 09:19:27.044138] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.507 [2024-12-13 09:19:27.215887] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:34.075 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:34.075 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:34.075 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EcOWTTjkkG 00:17:34.335 [2024-12-13 09:19:27.994218] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.EcOWTTjkkG': 0100666 00:17:34.335 [2024-12-13 09:19:27.994319] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:34.335 request: 00:17:34.335 { 00:17:34.335 "name": "key0", 00:17:34.335 "path": "/tmp/tmp.EcOWTTjkkG", 00:17:34.335 "method": "keyring_file_add_key", 00:17:34.335 "req_id": 1 00:17:34.335 } 00:17:34.335 Got JSON-RPC error response 00:17:34.335 response: 00:17:34.335 { 00:17:34.335 "code": -1, 00:17:34.335 "message": "Operation not permitted" 00:17:34.335 } 00:17:34.335 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:34.594 [2024-12-13 09:19:28.246450] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:34.594 [2024-12-13 09:19:28.246547] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:17:34.594 request: 00:17:34.594 { 00:17:34.594 "name": "TLSTEST", 00:17:34.594 "trtype": "tcp", 00:17:34.594 "traddr": "10.0.0.3", 00:17:34.594 "adrfam": "ipv4", 00:17:34.594 "trsvcid": "4420", 00:17:34.594 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:34.594 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:34.594 "prchk_reftag": false, 00:17:34.594 "prchk_guard": false, 00:17:34.594 "hdgst": false, 00:17:34.594 "ddgst": false, 00:17:34.594 "psk": "key0", 00:17:34.594 "allow_unrecognized_csi": false, 00:17:34.594 "method": "bdev_nvme_attach_controller", 00:17:34.594 "req_id": 1 00:17:34.594 } 00:17:34.594 Got JSON-RPC error response 00:17:34.594 response: 00:17:34.594 { 00:17:34.594 "code": -126, 00:17:34.594 "message": "Required key not available" 00:17:34.594 } 00:17:34.594 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 76494 00:17:34.594 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76494 ']' 00:17:34.594 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76494 00:17:34.594 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:34.594 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:34.594 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76494 00:17:34.594 killing process with pid 76494 00:17:34.594 Received shutdown signal, test time was about 10.000000 seconds 00:17:34.594 00:17:34.594 Latency(us) 00:17:34.594 [2024-12-13T09:19:28.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.594 [2024-12-13T09:19:28.484Z] =================================================================================================================== 00:17:34.594 [2024-12-13T09:19:28.484Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:34.594 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:34.594 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:34.594 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76494' 00:17:34.594 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76494 00:17:34.594 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76494 00:17:35.533 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:35.533 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:35.533 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:35.533 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:35.533 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:35.533 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 76288 00:17:35.533 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76288 ']' 00:17:35.533 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76288 00:17:35.533 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:35.533 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:35.533 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76288 00:17:35.533 killing process with pid 76288 00:17:35.533 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:35.533 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:35.533 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76288' 00:17:35.533 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76288 00:17:35.533 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76288 00:17:36.913 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:17:36.913 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:36.913 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:36.913 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:17:36.913 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=76553 00:17:36.913 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:36.913 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 76553 00:17:36.913 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76553 ']' 00:17:36.913 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.913 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:36.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.913 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.913 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:36.913 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:36.913 [2024-12-13 09:19:30.525354] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:36.913 [2024-12-13 09:19:30.525523] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.913 [2024-12-13 09:19:30.710706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.173 [2024-12-13 09:19:30.804931] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.173 [2024-12-13 09:19:30.805014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.173 [2024-12-13 09:19:30.805047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.173 [2024-12-13 09:19:30.805071] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.173 [2024-12-13 09:19:30.805098] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
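The target these RPCs are aimed at was started inside a dedicated network namespace, as the nvmf/common.sh trace above shows. Reduced to a stand-alone sketch (the nvmf_tgt_ns_spdk namespace is assumed to exist already; only the ip netns exec call itself appears in the trace):

# NVMe-oF target on core 1 (-m 0x2) with all tracepoint groups enabled (-e 0xFFFF);
# it serves RPCs on the default /var/tmp/spdk.sock.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!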
00:17:37.173 [2024-12-13 09:19:30.806354] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.173 [2024-12-13 09:19:30.977558] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:37.742 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.742 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:37.742 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:37.742 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:37.742 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.742 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.742 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.EcOWTTjkkG 00:17:37.742 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:37.742 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.EcOWTTjkkG 00:17:37.742 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:17:37.742 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:37.742 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:17:37.742 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:37.742 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.EcOWTTjkkG 00:17:37.742 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.EcOWTTjkkG 00:17:37.742 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:38.002 [2024-12-13 09:19:31.816391] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.002 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:38.261 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:38.520 [2024-12-13 09:19:32.328651] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:38.520 [2024-12-13 09:19:32.329092] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:38.520 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:38.779 malloc0 00:17:39.038 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:39.297 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.EcOWTTjkkG 00:17:39.297 
[2024-12-13 09:19:33.172595] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.EcOWTTjkkG': 0100666 00:17:39.297 [2024-12-13 09:19:33.172678] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:39.297 request: 00:17:39.297 { 00:17:39.297 "name": "key0", 00:17:39.297 "path": "/tmp/tmp.EcOWTTjkkG", 00:17:39.297 "method": "keyring_file_add_key", 00:17:39.297 "req_id": 1 00:17:39.297 } 00:17:39.297 Got JSON-RPC error response 00:17:39.297 response: 00:17:39.297 { 00:17:39.297 "code": -1, 00:17:39.297 "message": "Operation not permitted" 00:17:39.297 } 00:17:39.556 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:39.556 [2024-12-13 09:19:33.416769] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:17:39.556 [2024-12-13 09:19:33.416907] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:39.556 request: 00:17:39.556 { 00:17:39.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.556 "host": "nqn.2016-06.io.spdk:host1", 00:17:39.556 "psk": "key0", 00:17:39.556 "method": "nvmf_subsystem_add_host", 00:17:39.556 "req_id": 1 00:17:39.556 } 00:17:39.556 Got JSON-RPC error response 00:17:39.556 response: 00:17:39.556 { 00:17:39.556 "code": -32603, 00:17:39.556 "message": "Internal error" 00:17:39.556 } 00:17:39.556 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:39.556 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:39.556 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:39.556 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:39.556 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 76553 00:17:39.556 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76553 ']' 00:17:39.556 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76553 00:17:39.556 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:39.556 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.556 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76553 00:17:39.815 killing process with pid 76553 00:17:39.815 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:39.815 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:39.815 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76553' 00:17:39.815 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76553 00:17:39.815 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76553 00:17:40.753 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.EcOWTTjkkG 00:17:40.753 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:17:40.753 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:40.753 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:40.753 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:40.753 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=76629 00:17:40.753 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 76629 00:17:40.753 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:40.753 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76629 ']' 00:17:40.753 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.753 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:40.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.753 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.753 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:40.753 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:40.753 [2024-12-13 09:19:34.531657] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:40.753 [2024-12-13 09:19:34.531830] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.012 [2024-12-13 09:19:34.703080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.012 [2024-12-13 09:19:34.786990] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.012 [2024-12-13 09:19:34.787062] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.012 [2024-12-13 09:19:34.787079] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.012 [2024-12-13 09:19:34.787113] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.012 [2024-12-13 09:19:34.787126] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
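Both keyring_file_add_key failures above come from the PSK file's mode: it was generated 0666 (the 0100666 in the error), and SPDK's file-based keyring rejects key files that are group- or world-accessible, so the follow-on nvmf_subsystem_add_host and bdev_nvme_attach_controller calls fail against a key that never got loaded. The remedy the script applies at target/tls.sh@182 before this second target start, as a minimal sketch:

# Restrict the PSK interchange file to its owner.
chmod 0600 /tmp/tmp.EcOWTTjkkG
# With owner-only permissions the same RPC succeeds, as the trace below shows.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.EcOWTTjkkG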
00:17:41.012 [2024-12-13 09:19:34.788246] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.272 [2024-12-13 09:19:34.940385] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:41.839 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.839 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:41.839 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:41.839 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:41.839 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:41.839 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.839 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.EcOWTTjkkG 00:17:41.839 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.EcOWTTjkkG 00:17:41.839 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:41.839 [2024-12-13 09:19:35.685061] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.839 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:42.413 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:42.413 [2024-12-13 09:19:36.261160] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:42.413 [2024-12-13 09:19:36.261550] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:42.413 09:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:42.702 malloc0 00:17:42.702 09:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:42.961 09:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.EcOWTTjkkG 00:17:43.220 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:43.480 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=76690 00:17:43.480 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:43.480 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:43.480 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 76690 /var/tmp/bdevperf.sock 00:17:43.480 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76690 ']' 
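Collected from the trace above, the working target-side sequence for a TLS-secured listener once key0 loads cleanly. Every RPC is verbatim from target/tls.sh; the rpc variable is only shorthand introduced here.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.EcOWTTjkkG
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k on nvmf_subsystem_add_listener is what marks the listener as TLS; it shows up as "secure_channel": true in the saved configuration further down.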
00:17:43.480 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:43.480 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:43.480 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:43.480 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.480 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.739 [2024-12-13 09:19:37.387468] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:43.739 [2024-12-13 09:19:37.387612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76690 ] 00:17:43.739 [2024-12-13 09:19:37.562312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.998 [2024-12-13 09:19:37.686186] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.998 [2024-12-13 09:19:37.846062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:44.567 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.567 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:44.567 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EcOWTTjkkG 00:17:44.826 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:45.085 [2024-12-13 09:19:38.870104] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:45.085 TLSTESTn1 00:17:45.085 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:45.654 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:17:45.654 "subsystems": [ 00:17:45.654 { 00:17:45.654 "subsystem": "keyring", 00:17:45.654 "config": [ 00:17:45.654 { 00:17:45.654 "method": "keyring_file_add_key", 00:17:45.654 "params": { 00:17:45.654 "name": "key0", 00:17:45.654 "path": "/tmp/tmp.EcOWTTjkkG" 00:17:45.654 } 00:17:45.654 } 00:17:45.654 ] 00:17:45.654 }, 00:17:45.654 { 00:17:45.654 "subsystem": "iobuf", 00:17:45.654 "config": [ 00:17:45.654 { 00:17:45.654 "method": "iobuf_set_options", 00:17:45.654 "params": { 00:17:45.654 "small_pool_count": 8192, 00:17:45.654 "large_pool_count": 1024, 00:17:45.654 "small_bufsize": 8192, 00:17:45.654 "large_bufsize": 135168, 00:17:45.654 "enable_numa": false 00:17:45.654 } 00:17:45.654 } 00:17:45.654 ] 00:17:45.654 }, 00:17:45.654 { 00:17:45.654 "subsystem": "sock", 00:17:45.654 "config": [ 00:17:45.654 { 00:17:45.654 "method": "sock_set_default_impl", 00:17:45.654 "params": { 
00:17:45.654 "impl_name": "uring" 00:17:45.654 } 00:17:45.654 }, 00:17:45.654 { 00:17:45.654 "method": "sock_impl_set_options", 00:17:45.654 "params": { 00:17:45.654 "impl_name": "ssl", 00:17:45.654 "recv_buf_size": 4096, 00:17:45.654 "send_buf_size": 4096, 00:17:45.654 "enable_recv_pipe": true, 00:17:45.654 "enable_quickack": false, 00:17:45.654 "enable_placement_id": 0, 00:17:45.654 "enable_zerocopy_send_server": true, 00:17:45.654 "enable_zerocopy_send_client": false, 00:17:45.654 "zerocopy_threshold": 0, 00:17:45.654 "tls_version": 0, 00:17:45.654 "enable_ktls": false 00:17:45.654 } 00:17:45.654 }, 00:17:45.654 { 00:17:45.654 "method": "sock_impl_set_options", 00:17:45.654 "params": { 00:17:45.654 "impl_name": "posix", 00:17:45.654 "recv_buf_size": 2097152, 00:17:45.654 "send_buf_size": 2097152, 00:17:45.654 "enable_recv_pipe": true, 00:17:45.654 "enable_quickack": false, 00:17:45.654 "enable_placement_id": 0, 00:17:45.654 "enable_zerocopy_send_server": true, 00:17:45.654 "enable_zerocopy_send_client": false, 00:17:45.654 "zerocopy_threshold": 0, 00:17:45.654 "tls_version": 0, 00:17:45.654 "enable_ktls": false 00:17:45.654 } 00:17:45.654 }, 00:17:45.654 { 00:17:45.654 "method": "sock_impl_set_options", 00:17:45.654 "params": { 00:17:45.654 "impl_name": "uring", 00:17:45.654 "recv_buf_size": 2097152, 00:17:45.654 "send_buf_size": 2097152, 00:17:45.654 "enable_recv_pipe": true, 00:17:45.654 "enable_quickack": false, 00:17:45.654 "enable_placement_id": 0, 00:17:45.654 "enable_zerocopy_send_server": false, 00:17:45.654 "enable_zerocopy_send_client": false, 00:17:45.654 "zerocopy_threshold": 0, 00:17:45.654 "tls_version": 0, 00:17:45.654 "enable_ktls": false 00:17:45.654 } 00:17:45.654 } 00:17:45.654 ] 00:17:45.654 }, 00:17:45.654 { 00:17:45.654 "subsystem": "vmd", 00:17:45.654 "config": [] 00:17:45.654 }, 00:17:45.654 { 00:17:45.654 "subsystem": "accel", 00:17:45.654 "config": [ 00:17:45.654 { 00:17:45.654 "method": "accel_set_options", 00:17:45.654 "params": { 00:17:45.654 "small_cache_size": 128, 00:17:45.654 "large_cache_size": 16, 00:17:45.654 "task_count": 2048, 00:17:45.654 "sequence_count": 2048, 00:17:45.654 "buf_count": 2048 00:17:45.654 } 00:17:45.654 } 00:17:45.654 ] 00:17:45.654 }, 00:17:45.654 { 00:17:45.654 "subsystem": "bdev", 00:17:45.654 "config": [ 00:17:45.654 { 00:17:45.654 "method": "bdev_set_options", 00:17:45.654 "params": { 00:17:45.654 "bdev_io_pool_size": 65535, 00:17:45.654 "bdev_io_cache_size": 256, 00:17:45.654 "bdev_auto_examine": true, 00:17:45.655 "iobuf_small_cache_size": 128, 00:17:45.655 "iobuf_large_cache_size": 16 00:17:45.655 } 00:17:45.655 }, 00:17:45.655 { 00:17:45.655 "method": "bdev_raid_set_options", 00:17:45.655 "params": { 00:17:45.655 "process_window_size_kb": 1024, 00:17:45.655 "process_max_bandwidth_mb_sec": 0 00:17:45.655 } 00:17:45.655 }, 00:17:45.655 { 00:17:45.655 "method": "bdev_iscsi_set_options", 00:17:45.655 "params": { 00:17:45.655 "timeout_sec": 30 00:17:45.655 } 00:17:45.655 }, 00:17:45.655 { 00:17:45.655 "method": "bdev_nvme_set_options", 00:17:45.655 "params": { 00:17:45.655 "action_on_timeout": "none", 00:17:45.655 "timeout_us": 0, 00:17:45.655 "timeout_admin_us": 0, 00:17:45.655 "keep_alive_timeout_ms": 10000, 00:17:45.655 "arbitration_burst": 0, 00:17:45.655 "low_priority_weight": 0, 00:17:45.655 "medium_priority_weight": 0, 00:17:45.655 "high_priority_weight": 0, 00:17:45.655 "nvme_adminq_poll_period_us": 10000, 00:17:45.655 "nvme_ioq_poll_period_us": 0, 00:17:45.655 "io_queue_requests": 0, 00:17:45.655 "delay_cmd_submit": 
true, 00:17:45.655 "transport_retry_count": 4, 00:17:45.655 "bdev_retry_count": 3, 00:17:45.655 "transport_ack_timeout": 0, 00:17:45.655 "ctrlr_loss_timeout_sec": 0, 00:17:45.655 "reconnect_delay_sec": 0, 00:17:45.655 "fast_io_fail_timeout_sec": 0, 00:17:45.655 "disable_auto_failback": false, 00:17:45.655 "generate_uuids": false, 00:17:45.655 "transport_tos": 0, 00:17:45.655 "nvme_error_stat": false, 00:17:45.655 "rdma_srq_size": 0, 00:17:45.655 "io_path_stat": false, 00:17:45.655 "allow_accel_sequence": false, 00:17:45.655 "rdma_max_cq_size": 0, 00:17:45.655 "rdma_cm_event_timeout_ms": 0, 00:17:45.655 "dhchap_digests": [ 00:17:45.655 "sha256", 00:17:45.655 "sha384", 00:17:45.655 "sha512" 00:17:45.655 ], 00:17:45.655 "dhchap_dhgroups": [ 00:17:45.655 "null", 00:17:45.655 "ffdhe2048", 00:17:45.655 "ffdhe3072", 00:17:45.655 "ffdhe4096", 00:17:45.655 "ffdhe6144", 00:17:45.655 "ffdhe8192" 00:17:45.655 ], 00:17:45.655 "rdma_umr_per_io": false 00:17:45.655 } 00:17:45.655 }, 00:17:45.655 { 00:17:45.655 "method": "bdev_nvme_set_hotplug", 00:17:45.655 "params": { 00:17:45.655 "period_us": 100000, 00:17:45.655 "enable": false 00:17:45.655 } 00:17:45.655 }, 00:17:45.655 { 00:17:45.655 "method": "bdev_malloc_create", 00:17:45.655 "params": { 00:17:45.655 "name": "malloc0", 00:17:45.655 "num_blocks": 8192, 00:17:45.655 "block_size": 4096, 00:17:45.655 "physical_block_size": 4096, 00:17:45.655 "uuid": "ab12f185-7129-4264-9409-fa7aba1a79f0", 00:17:45.655 "optimal_io_boundary": 0, 00:17:45.655 "md_size": 0, 00:17:45.655 "dif_type": 0, 00:17:45.655 "dif_is_head_of_md": false, 00:17:45.655 "dif_pi_format": 0 00:17:45.655 } 00:17:45.655 }, 00:17:45.655 { 00:17:45.655 "method": "bdev_wait_for_examine" 00:17:45.655 } 00:17:45.655 ] 00:17:45.655 }, 00:17:45.655 { 00:17:45.655 "subsystem": "nbd", 00:17:45.655 "config": [] 00:17:45.655 }, 00:17:45.655 { 00:17:45.655 "subsystem": "scheduler", 00:17:45.655 "config": [ 00:17:45.655 { 00:17:45.655 "method": "framework_set_scheduler", 00:17:45.655 "params": { 00:17:45.655 "name": "static" 00:17:45.655 } 00:17:45.655 } 00:17:45.655 ] 00:17:45.655 }, 00:17:45.655 { 00:17:45.655 "subsystem": "nvmf", 00:17:45.655 "config": [ 00:17:45.655 { 00:17:45.655 "method": "nvmf_set_config", 00:17:45.655 "params": { 00:17:45.655 "discovery_filter": "match_any", 00:17:45.655 "admin_cmd_passthru": { 00:17:45.655 "identify_ctrlr": false 00:17:45.655 }, 00:17:45.655 "dhchap_digests": [ 00:17:45.655 "sha256", 00:17:45.655 "sha384", 00:17:45.655 "sha512" 00:17:45.655 ], 00:17:45.655 "dhchap_dhgroups": [ 00:17:45.655 "null", 00:17:45.655 "ffdhe2048", 00:17:45.655 "ffdhe3072", 00:17:45.655 "ffdhe4096", 00:17:45.655 "ffdhe6144", 00:17:45.655 "ffdhe8192" 00:17:45.655 ] 00:17:45.655 } 00:17:45.655 }, 00:17:45.655 { 00:17:45.655 "method": "nvmf_set_max_subsystems", 00:17:45.655 "params": { 00:17:45.655 "max_subsystems": 1024 00:17:45.655 } 00:17:45.655 }, 00:17:45.655 { 00:17:45.655 "method": "nvmf_set_crdt", 00:17:45.655 "params": { 00:17:45.655 "crdt1": 0, 00:17:45.655 "crdt2": 0, 00:17:45.655 "crdt3": 0 00:17:45.655 } 00:17:45.655 }, 00:17:45.655 { 00:17:45.655 "method": "nvmf_create_transport", 00:17:45.655 "params": { 00:17:45.655 "trtype": "TCP", 00:17:45.655 "max_queue_depth": 128, 00:17:45.655 "max_io_qpairs_per_ctrlr": 127, 00:17:45.655 "in_capsule_data_size": 4096, 00:17:45.655 "max_io_size": 131072, 00:17:45.655 "io_unit_size": 131072, 00:17:45.655 "max_aq_depth": 128, 00:17:45.655 "num_shared_buffers": 511, 00:17:45.655 "buf_cache_size": 4294967295, 00:17:45.655 
"dif_insert_or_strip": false, 00:17:45.655 "zcopy": false, 00:17:45.655 "c2h_success": false, 00:17:45.655 "sock_priority": 0, 00:17:45.655 "abort_timeout_sec": 1, 00:17:45.655 "ack_timeout": 0, 00:17:45.655 "data_wr_pool_size": 0 00:17:45.655 } 00:17:45.655 }, 00:17:45.655 { 00:17:45.655 "method": "nvmf_create_subsystem", 00:17:45.655 "params": { 00:17:45.655 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.655 "allow_any_host": false, 00:17:45.655 "serial_number": "SPDK00000000000001", 00:17:45.655 "model_number": "SPDK bdev Controller", 00:17:45.655 "max_namespaces": 10, 00:17:45.655 "min_cntlid": 1, 00:17:45.655 "max_cntlid": 65519, 00:17:45.655 "ana_reporting": false 00:17:45.655 } 00:17:45.655 }, 00:17:45.655 { 00:17:45.655 "method": "nvmf_subsystem_add_host", 00:17:45.655 "params": { 00:17:45.655 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.655 "host": "nqn.2016-06.io.spdk:host1", 00:17:45.655 "psk": "key0" 00:17:45.655 } 00:17:45.655 }, 00:17:45.655 { 00:17:45.655 "method": "nvmf_subsystem_add_ns", 00:17:45.655 "params": { 00:17:45.655 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.655 "namespace": { 00:17:45.655 "nsid": 1, 00:17:45.655 "bdev_name": "malloc0", 00:17:45.655 "nguid": "AB12F185712942649409FA7ABA1A79F0", 00:17:45.655 "uuid": "ab12f185-7129-4264-9409-fa7aba1a79f0", 00:17:45.655 "no_auto_visible": false 00:17:45.655 } 00:17:45.655 } 00:17:45.656 }, 00:17:45.656 { 00:17:45.656 "method": "nvmf_subsystem_add_listener", 00:17:45.656 "params": { 00:17:45.656 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.656 "listen_address": { 00:17:45.656 "trtype": "TCP", 00:17:45.656 "adrfam": "IPv4", 00:17:45.656 "traddr": "10.0.0.3", 00:17:45.656 "trsvcid": "4420" 00:17:45.656 }, 00:17:45.656 "secure_channel": true 00:17:45.656 } 00:17:45.656 } 00:17:45.656 ] 00:17:45.656 } 00:17:45.656 ] 00:17:45.656 }' 00:17:45.656 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:45.916 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:17:45.916 "subsystems": [ 00:17:45.916 { 00:17:45.916 "subsystem": "keyring", 00:17:45.916 "config": [ 00:17:45.916 { 00:17:45.916 "method": "keyring_file_add_key", 00:17:45.916 "params": { 00:17:45.916 "name": "key0", 00:17:45.916 "path": "/tmp/tmp.EcOWTTjkkG" 00:17:45.916 } 00:17:45.916 } 00:17:45.916 ] 00:17:45.916 }, 00:17:45.916 { 00:17:45.916 "subsystem": "iobuf", 00:17:45.916 "config": [ 00:17:45.916 { 00:17:45.916 "method": "iobuf_set_options", 00:17:45.916 "params": { 00:17:45.916 "small_pool_count": 8192, 00:17:45.916 "large_pool_count": 1024, 00:17:45.916 "small_bufsize": 8192, 00:17:45.916 "large_bufsize": 135168, 00:17:45.916 "enable_numa": false 00:17:45.916 } 00:17:45.916 } 00:17:45.916 ] 00:17:45.916 }, 00:17:45.916 { 00:17:45.916 "subsystem": "sock", 00:17:45.916 "config": [ 00:17:45.916 { 00:17:45.916 "method": "sock_set_default_impl", 00:17:45.916 "params": { 00:17:45.916 "impl_name": "uring" 00:17:45.916 } 00:17:45.916 }, 00:17:45.916 { 00:17:45.917 "method": "sock_impl_set_options", 00:17:45.917 "params": { 00:17:45.917 "impl_name": "ssl", 00:17:45.917 "recv_buf_size": 4096, 00:17:45.917 "send_buf_size": 4096, 00:17:45.917 "enable_recv_pipe": true, 00:17:45.917 "enable_quickack": false, 00:17:45.917 "enable_placement_id": 0, 00:17:45.917 "enable_zerocopy_send_server": true, 00:17:45.917 "enable_zerocopy_send_client": false, 00:17:45.917 "zerocopy_threshold": 0, 00:17:45.917 "tls_version": 0, 00:17:45.917 
"enable_ktls": false 00:17:45.917 } 00:17:45.917 }, 00:17:45.917 { 00:17:45.917 "method": "sock_impl_set_options", 00:17:45.917 "params": { 00:17:45.917 "impl_name": "posix", 00:17:45.917 "recv_buf_size": 2097152, 00:17:45.917 "send_buf_size": 2097152, 00:17:45.917 "enable_recv_pipe": true, 00:17:45.917 "enable_quickack": false, 00:17:45.917 "enable_placement_id": 0, 00:17:45.917 "enable_zerocopy_send_server": true, 00:17:45.917 "enable_zerocopy_send_client": false, 00:17:45.917 "zerocopy_threshold": 0, 00:17:45.917 "tls_version": 0, 00:17:45.917 "enable_ktls": false 00:17:45.917 } 00:17:45.917 }, 00:17:45.917 { 00:17:45.917 "method": "sock_impl_set_options", 00:17:45.917 "params": { 00:17:45.917 "impl_name": "uring", 00:17:45.917 "recv_buf_size": 2097152, 00:17:45.917 "send_buf_size": 2097152, 00:17:45.917 "enable_recv_pipe": true, 00:17:45.917 "enable_quickack": false, 00:17:45.917 "enable_placement_id": 0, 00:17:45.917 "enable_zerocopy_send_server": false, 00:17:45.917 "enable_zerocopy_send_client": false, 00:17:45.917 "zerocopy_threshold": 0, 00:17:45.917 "tls_version": 0, 00:17:45.917 "enable_ktls": false 00:17:45.917 } 00:17:45.917 } 00:17:45.917 ] 00:17:45.917 }, 00:17:45.917 { 00:17:45.917 "subsystem": "vmd", 00:17:45.917 "config": [] 00:17:45.917 }, 00:17:45.917 { 00:17:45.917 "subsystem": "accel", 00:17:45.917 "config": [ 00:17:45.917 { 00:17:45.917 "method": "accel_set_options", 00:17:45.917 "params": { 00:17:45.917 "small_cache_size": 128, 00:17:45.917 "large_cache_size": 16, 00:17:45.917 "task_count": 2048, 00:17:45.917 "sequence_count": 2048, 00:17:45.917 "buf_count": 2048 00:17:45.917 } 00:17:45.917 } 00:17:45.917 ] 00:17:45.917 }, 00:17:45.917 { 00:17:45.917 "subsystem": "bdev", 00:17:45.917 "config": [ 00:17:45.917 { 00:17:45.917 "method": "bdev_set_options", 00:17:45.917 "params": { 00:17:45.917 "bdev_io_pool_size": 65535, 00:17:45.917 "bdev_io_cache_size": 256, 00:17:45.917 "bdev_auto_examine": true, 00:17:45.917 "iobuf_small_cache_size": 128, 00:17:45.917 "iobuf_large_cache_size": 16 00:17:45.917 } 00:17:45.917 }, 00:17:45.917 { 00:17:45.917 "method": "bdev_raid_set_options", 00:17:45.917 "params": { 00:17:45.917 "process_window_size_kb": 1024, 00:17:45.917 "process_max_bandwidth_mb_sec": 0 00:17:45.917 } 00:17:45.917 }, 00:17:45.917 { 00:17:45.917 "method": "bdev_iscsi_set_options", 00:17:45.917 "params": { 00:17:45.917 "timeout_sec": 30 00:17:45.917 } 00:17:45.917 }, 00:17:45.917 { 00:17:45.917 "method": "bdev_nvme_set_options", 00:17:45.917 "params": { 00:17:45.917 "action_on_timeout": "none", 00:17:45.917 "timeout_us": 0, 00:17:45.917 "timeout_admin_us": 0, 00:17:45.917 "keep_alive_timeout_ms": 10000, 00:17:45.917 "arbitration_burst": 0, 00:17:45.917 "low_priority_weight": 0, 00:17:45.917 "medium_priority_weight": 0, 00:17:45.917 "high_priority_weight": 0, 00:17:45.917 "nvme_adminq_poll_period_us": 10000, 00:17:45.917 "nvme_ioq_poll_period_us": 0, 00:17:45.917 "io_queue_requests": 512, 00:17:45.917 "delay_cmd_submit": true, 00:17:45.917 "transport_retry_count": 4, 00:17:45.917 "bdev_retry_count": 3, 00:17:45.917 "transport_ack_timeout": 0, 00:17:45.917 "ctrlr_loss_timeout_sec": 0, 00:17:45.917 "reconnect_delay_sec": 0, 00:17:45.917 "fast_io_fail_timeout_sec": 0, 00:17:45.917 "disable_auto_failback": false, 00:17:45.917 "generate_uuids": false, 00:17:45.917 "transport_tos": 0, 00:17:45.917 "nvme_error_stat": false, 00:17:45.917 "rdma_srq_size": 0, 00:17:45.917 "io_path_stat": false, 00:17:45.917 "allow_accel_sequence": false, 00:17:45.917 "rdma_max_cq_size": 0, 
00:17:45.917 "rdma_cm_event_timeout_ms": 0, 00:17:45.917 "dhchap_digests": [ 00:17:45.917 "sha256", 00:17:45.917 "sha384", 00:17:45.917 "sha512" 00:17:45.917 ], 00:17:45.917 "dhchap_dhgroups": [ 00:17:45.917 "null", 00:17:45.917 "ffdhe2048", 00:17:45.917 "ffdhe3072", 00:17:45.917 "ffdhe4096", 00:17:45.917 "ffdhe6144", 00:17:45.917 "ffdhe8192" 00:17:45.917 ], 00:17:45.917 "rdma_umr_per_io": false 00:17:45.917 } 00:17:45.917 }, 00:17:45.917 { 00:17:45.917 "method": "bdev_nvme_attach_controller", 00:17:45.917 "params": { 00:17:45.917 "name": "TLSTEST", 00:17:45.917 "trtype": "TCP", 00:17:45.917 "adrfam": "IPv4", 00:17:45.917 "traddr": "10.0.0.3", 00:17:45.917 "trsvcid": "4420", 00:17:45.917 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.917 "prchk_reftag": false, 00:17:45.917 "prchk_guard": false, 00:17:45.917 "ctrlr_loss_timeout_sec": 0, 00:17:45.917 "reconnect_delay_sec": 0, 00:17:45.917 "fast_io_fail_timeout_sec": 0, 00:17:45.917 "psk": "key0", 00:17:45.917 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:45.917 "hdgst": false, 00:17:45.917 "ddgst": false, 00:17:45.917 "multipath": "multipath" 00:17:45.917 } 00:17:45.917 }, 00:17:45.917 { 00:17:45.917 "method": "bdev_nvme_set_hotplug", 00:17:45.917 "params": { 00:17:45.917 "period_us": 100000, 00:17:45.917 "enable": false 00:17:45.917 } 00:17:45.917 }, 00:17:45.917 { 00:17:45.917 "method": "bdev_wait_for_examine" 00:17:45.917 } 00:17:45.917 ] 00:17:45.917 }, 00:17:45.917 { 00:17:45.917 "subsystem": "nbd", 00:17:45.917 "config": [] 00:17:45.917 } 00:17:45.917 ] 00:17:45.917 }' 00:17:45.917 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 76690 00:17:45.917 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76690 ']' 00:17:45.917 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76690 00:17:45.917 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:45.917 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:45.917 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76690 00:17:45.917 killing process with pid 76690 00:17:45.917 Received shutdown signal, test time was about 10.000000 seconds 00:17:45.917 00:17:45.918 Latency(us) 00:17:45.918 [2024-12-13T09:19:39.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.918 [2024-12-13T09:19:39.808Z] =================================================================================================================== 00:17:45.918 [2024-12-13T09:19:39.808Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:45.918 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:45.918 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:45.918 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76690' 00:17:45.918 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76690 00:17:45.918 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76690 00:17:46.855 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 76629 00:17:46.856 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76629 ']' 
00:17:46.856 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76629 00:17:46.856 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:46.856 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:46.856 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76629 00:17:46.856 killing process with pid 76629 00:17:46.856 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:46.856 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:46.856 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76629' 00:17:46.856 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76629 00:17:46.856 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76629 00:17:47.792 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:47.792 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:47.793 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:47.793 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:17:47.793 "subsystems": [ 00:17:47.793 { 00:17:47.793 "subsystem": "keyring", 00:17:47.793 "config": [ 00:17:47.793 { 00:17:47.793 "method": "keyring_file_add_key", 00:17:47.793 "params": { 00:17:47.793 "name": "key0", 00:17:47.793 "path": "/tmp/tmp.EcOWTTjkkG" 00:17:47.793 } 00:17:47.793 } 00:17:47.793 ] 00:17:47.793 }, 00:17:47.793 { 00:17:47.793 "subsystem": "iobuf", 00:17:47.793 "config": [ 00:17:47.793 { 00:17:47.793 "method": "iobuf_set_options", 00:17:47.793 "params": { 00:17:47.793 "small_pool_count": 8192, 00:17:47.793 "large_pool_count": 1024, 00:17:47.793 "small_bufsize": 8192, 00:17:47.793 "large_bufsize": 135168, 00:17:47.793 "enable_numa": false 00:17:47.793 } 00:17:47.793 } 00:17:47.793 ] 00:17:47.793 }, 00:17:47.793 { 00:17:47.793 "subsystem": "sock", 00:17:47.793 "config": [ 00:17:47.793 { 00:17:47.793 "method": "sock_set_default_impl", 00:17:47.793 "params": { 00:17:47.793 "impl_name": "uring" 00:17:47.793 } 00:17:47.793 }, 00:17:47.793 { 00:17:47.793 "method": "sock_impl_set_options", 00:17:47.793 "params": { 00:17:47.793 "impl_name": "ssl", 00:17:47.793 "recv_buf_size": 4096, 00:17:47.793 "send_buf_size": 4096, 00:17:47.793 "enable_recv_pipe": true, 00:17:47.793 "enable_quickack": false, 00:17:47.793 "enable_placement_id": 0, 00:17:47.793 "enable_zerocopy_send_server": true, 00:17:47.793 "enable_zerocopy_send_client": false, 00:17:47.793 "zerocopy_threshold": 0, 00:17:47.793 "tls_version": 0, 00:17:47.793 "enable_ktls": false 00:17:47.793 } 00:17:47.793 }, 00:17:47.793 { 00:17:47.793 "method": "sock_impl_set_options", 00:17:47.793 "params": { 00:17:47.793 "impl_name": "posix", 00:17:47.793 "recv_buf_size": 2097152, 00:17:47.793 "send_buf_size": 2097152, 00:17:47.793 "enable_recv_pipe": true, 00:17:47.793 "enable_quickack": false, 00:17:47.793 "enable_placement_id": 0, 00:17:47.793 "enable_zerocopy_send_server": true, 00:17:47.793 "enable_zerocopy_send_client": false, 00:17:47.793 "zerocopy_threshold": 0, 00:17:47.793 "tls_version": 0, 00:17:47.793 "enable_ktls": false 
00:17:47.793 } 00:17:47.793 }, 00:17:47.793 { 00:17:47.793 "method": "sock_impl_set_options", 00:17:47.793 "params": { 00:17:47.793 "impl_name": "uring", 00:17:47.793 "recv_buf_size": 2097152, 00:17:47.793 "send_buf_size": 2097152, 00:17:47.793 "enable_recv_pipe": true, 00:17:47.793 "enable_quickack": false, 00:17:47.793 "enable_placement_id": 0, 00:17:47.793 "enable_zerocopy_send_server": false, 00:17:47.793 "enable_zerocopy_send_client": false, 00:17:47.793 "zerocopy_threshold": 0, 00:17:47.793 "tls_version": 0, 00:17:47.793 "enable_ktls": false 00:17:47.793 } 00:17:47.793 } 00:17:47.793 ] 00:17:47.793 }, 00:17:47.793 { 00:17:47.793 "subsystem": "vmd", 00:17:47.793 "config": [] 00:17:47.793 }, 00:17:47.793 { 00:17:47.793 "subsystem": "accel", 00:17:47.793 "config": [ 00:17:47.793 { 00:17:47.793 "method": "accel_set_options", 00:17:47.793 "params": { 00:17:47.793 "small_cache_size": 128, 00:17:47.793 "large_cache_size": 16, 00:17:47.793 "task_count": 2048, 00:17:47.793 "sequence_count": 2048, 00:17:47.793 "buf_count": 2048 00:17:47.793 } 00:17:47.793 } 00:17:47.793 ] 00:17:47.793 }, 00:17:47.793 { 00:17:47.793 "subsystem": "bdev", 00:17:47.793 "config": [ 00:17:47.793 { 00:17:47.793 "method": "bdev_set_options", 00:17:47.793 "params": { 00:17:47.793 "bdev_io_pool_size": 65535, 00:17:47.793 "bdev_io_cache_size": 256, 00:17:47.793 "bdev_auto_examine": true, 00:17:47.793 "iobuf_small_cache_size": 128, 00:17:47.793 "iobuf_large_cache_size": 16 00:17:47.793 } 00:17:47.793 }, 00:17:47.793 { 00:17:47.793 "method": "bdev_raid_set_options", 00:17:47.793 "params": { 00:17:47.793 "process_window_size_kb": 1024, 00:17:47.793 "process_max_bandwidth_mb_sec": 0 00:17:47.793 } 00:17:47.793 }, 00:17:47.793 { 00:17:47.793 "method": "bdev_iscsi_set_options", 00:17:47.793 "params": { 00:17:47.793 "timeout_sec": 30 00:17:47.793 } 00:17:47.793 }, 00:17:47.793 { 00:17:47.793 "method": "bdev_nvme_set_options", 00:17:47.793 "params": { 00:17:47.793 "action_on_timeout": "none", 00:17:47.793 "timeout_us": 0, 00:17:47.793 "timeout_admin_us": 0, 00:17:47.793 "keep_alive_timeout_ms": 10000, 00:17:47.793 "arbitration_burst": 0, 00:17:47.793 "low_priority_weight": 0, 00:17:47.793 "medium_priority_weight": 0, 00:17:47.793 "high_priority_weight": 0, 00:17:47.793 "nvme_adminq_poll_period_us": 10000, 00:17:47.793 "nvme_ioq_poll_period_us": 0, 00:17:47.793 "io_queue_requests": 0, 00:17:47.793 "delay_cmd_submit": true, 00:17:47.793 "transport_retry_count": 4, 00:17:47.793 "bdev_retry_count": 3, 00:17:47.793 "transport_ack_timeout": 0, 00:17:47.793 "ctrlr_loss_timeout_sec": 0, 00:17:47.793 "reconnect_delay_sec": 0, 00:17:47.793 "fast_io_fail_timeout_sec": 0, 00:17:47.793 "disable_auto_failback": false, 00:17:47.793 "generate_uuids": false, 00:17:47.793 "transport_tos": 0, 00:17:47.793 "nvme_error_stat": false, 00:17:47.793 "rdma_srq_size": 0, 00:17:47.793 "io_path_stat": false, 00:17:47.793 "allow_accel_sequence": false, 00:17:47.793 "rdma_max_cq_size": 0, 00:17:47.793 "rdma_cm_event_timeout_ms": 0, 00:17:47.793 "dhchap_digests": [ 00:17:47.793 "sha256", 00:17:47.793 "sha384", 00:17:47.793 "sha512" 00:17:47.793 ], 00:17:47.793 "dhchap_dhgroups": [ 00:17:47.793 "null", 00:17:47.793 "ffdhe2048", 00:17:47.793 "ffdhe3072", 00:17:47.793 "ffdhe4096", 00:17:47.793 "ffdhe6144", 00:17:47.793 "ffdhe8192" 00:17:47.793 ], 00:17:47.793 "rdma_umr_per_io": false 00:17:47.793 } 00:17:47.793 }, 00:17:47.793 { 00:17:47.793 "method": "bdev_nvme_set_hotplug", 00:17:47.793 "params": { 00:17:47.793 "period_us": 100000, 00:17:47.793 "enable": false 
00:17:47.793 } 00:17:47.793 }, 00:17:47.793 { 00:17:47.793 "method": "bdev_malloc_create", 00:17:47.793 "params": { 00:17:47.793 "name": "malloc0", 00:17:47.793 "num_blocks": 8192, 00:17:47.793 "block_size": 4096, 00:17:47.794 "physical_block_size": 4096, 00:17:47.794 "uuid": "ab12f185-7129-4264-9409-fa7aba1a79f0", 00:17:47.794 "optimal_io_boundary": 0, 00:17:47.794 "md_size": 0, 00:17:47.794 "dif_type": 0, 00:17:47.794 "dif_is_head_of_md": false, 00:17:47.794 "dif_pi_format": 0 00:17:47.794 } 00:17:47.794 }, 00:17:47.794 { 00:17:47.794 "method": "bdev_wait_for_examine" 00:17:47.794 } 00:17:47.794 ] 00:17:47.794 }, 00:17:47.794 { 00:17:47.794 "subsystem": "nbd", 00:17:47.794 "config": [] 00:17:47.794 }, 00:17:47.794 { 00:17:47.794 "subsystem": "scheduler", 00:17:47.794 "config": [ 00:17:47.794 { 00:17:47.794 "method": "framework_set_scheduler", 00:17:47.794 "params": { 00:17:47.794 "name": "static" 00:17:47.794 } 00:17:47.794 } 00:17:47.794 ] 00:17:47.794 }, 00:17:47.794 { 00:17:47.794 "subsystem": "nvmf", 00:17:47.794 "config": [ 00:17:47.794 { 00:17:47.794 "method": "nvmf_set_config", 00:17:47.794 "params": { 00:17:47.794 "discovery_filter": "match_any", 00:17:47.794 "admin_cmd_passthru": { 00:17:47.794 "identify_ctrlr": false 00:17:47.794 }, 00:17:47.794 "dhchap_digests": [ 00:17:47.794 "sha256", 00:17:47.794 "sha384", 00:17:47.794 "sha512" 00:17:47.794 ], 00:17:47.794 "dhchap_dhgroups": [ 00:17:47.794 "null", 00:17:47.794 "ffdhe2048", 00:17:47.794 "ffdhe3072", 00:17:47.794 "ffdhe4096", 00:17:47.794 "ffdhe6144", 00:17:47.794 "ffdhe8192" 00:17:47.794 ] 00:17:47.794 } 00:17:47.794 }, 00:17:47.794 { 00:17:47.794 "method": "nvmf_set_max_subsystems", 00:17:47.794 "params": { 00:17:47.794 "max_subsystems": 1024 00:17:47.794 } 00:17:47.794 }, 00:17:47.794 { 00:17:47.794 "method": "nvmf_set_crdt", 00:17:47.794 "params": { 00:17:47.794 "crdt1": 0, 00:17:47.794 "crdt2": 0, 00:17:47.794 "crdt3": 0 00:17:47.794 } 00:17:47.794 }, 00:17:47.794 { 00:17:47.794 "method": "nvmf_create_transport", 00:17:47.794 "params": { 00:17:47.794 "trtype": "TCP", 00:17:47.794 "max_queue_depth": 128, 00:17:47.794 "max_io_qpairs_per_ctrlr": 127, 00:17:47.794 "in_capsule_data_size": 4096, 00:17:47.794 "max_io_size": 131072, 00:17:47.794 "io_unit_size": 131072, 00:17:47.794 "max_aq_depth": 128, 00:17:47.794 "num_shared_buffers": 511, 00:17:47.794 "buf_cache_size": 4294967295, 00:17:47.794 "dif_insert_or_strip": false, 00:17:47.794 "zcopy": false, 00:17:47.794 "c2h_success": false, 00:17:47.794 "sock_priority": 0, 00:17:47.794 "abort_timeout_sec": 1, 00:17:47.794 "ack_timeout": 0, 00:17:47.794 "data_wr_pool_size": 0 00:17:47.794 } 00:17:47.794 }, 00:17:47.794 { 00:17:47.794 "method": "nvmf_create_subsystem", 00:17:47.794 "params": { 00:17:47.794 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.794 "allow_any_host": false, 00:17:47.794 "serial_number": "SPDK00000000000001", 00:17:47.794 "model_number": "SPDK bdev Controller", 00:17:47.794 "max_namespaces": 10, 00:17:47.794 "min_cntlid": 1, 00:17:47.794 "max_cntlid": 65519, 00:17:47.794 "ana_reporting": false 00:17:47.794 } 00:17:47.794 }, 00:17:47.794 { 00:17:47.794 "method": "nvmf_subsystem_add_host", 00:17:47.794 "params": { 00:17:47.794 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.794 "host": "nqn.2016-06.io.spdk:host1", 00:17:47.794 "psk": "key0" 00:17:47.794 } 00:17:47.794 }, 00:17:47.794 { 00:17:47.794 "method": "nvmf_subsystem_add_ns", 00:17:47.794 "params": { 00:17:47.794 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.794 "namespace": { 00:17:47.794 "nsid": 1, 
00:17:47.794 "bdev_name": "malloc0", 00:17:47.794 "nguid": "AB12F185712942649409FA7ABA1A79F0", 00:17:47.794 "uuid": "ab12f185-7129-4264-9409-fa7aba1a79f0", 00:17:47.794 "no_auto_visible": false 00:17:47.794 } 00:17:47.794 } 00:17:47.794 }, 00:17:47.794 { 00:17:47.794 "method": "nvmf_subsystem_add_listener", 00:17:47.794 "params": { 00:17:47.794 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.794 "listen_address": { 00:17:47.794 "trtype": "TCP", 00:17:47.794 "adrfam": "IPv4", 00:17:47.794 "traddr": "10.0.0.3", 00:17:47.794 "trsvcid": "4420" 00:17:47.794 }, 00:17:47.794 "secure_channel": true 00:17:47.794 } 00:17:47.794 } 00:17:47.794 ] 00:17:47.794 } 00:17:47.794 ] 00:17:47.794 }' 00:17:47.794 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.794 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=76753 00:17:47.794 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:47.794 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 76753 00:17:47.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.794 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76753 ']' 00:17:47.794 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.794 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.794 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.794 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.794 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.794 [2024-12-13 09:19:41.483037] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:47.794 [2024-12-13 09:19:41.484241] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.794 [2024-12-13 09:19:41.666084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.053 [2024-12-13 09:19:41.756718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.053 [2024-12-13 09:19:41.756772] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:48.053 [2024-12-13 09:19:41.756807] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:48.053 [2024-12-13 09:19:41.756829] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:48.053 [2024-12-13 09:19:41.756841] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:48.053 [2024-12-13 09:19:41.757965] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.312 [2024-12-13 09:19:42.016731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:48.312 [2024-12-13 09:19:42.166130] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:48.312 [2024-12-13 09:19:42.198127] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:48.312 [2024-12-13 09:19:42.198495] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:48.571 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.571 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:48.571 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:48.571 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:48.571 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:48.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:48.571 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.571 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=76785 00:17:48.571 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 76785 /var/tmp/bdevperf.sock 00:17:48.571 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76785 ']' 00:17:48.571 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:48.571 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:48.571 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
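The bdevperf relaunch traced next follows the same pattern: its saved JSON goes in through -c /dev/fd/63, and because of -z it idles until the helper script starts the workload over the RPC socket. A stand-alone sketch, with bdevperfconf.json as an assumed file holding the JSON echoed below:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c bdevperfconf.json &
# Kick off the run and collect results (options verbatim from target/tls.sh@213).
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests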
00:17:48.571 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:48.571 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.571 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:17:48.571 "subsystems": [ 00:17:48.571 { 00:17:48.571 "subsystem": "keyring", 00:17:48.571 "config": [ 00:17:48.571 { 00:17:48.571 "method": "keyring_file_add_key", 00:17:48.571 "params": { 00:17:48.571 "name": "key0", 00:17:48.571 "path": "/tmp/tmp.EcOWTTjkkG" 00:17:48.571 } 00:17:48.571 } 00:17:48.571 ] 00:17:48.571 }, 00:17:48.571 { 00:17:48.571 "subsystem": "iobuf", 00:17:48.571 "config": [ 00:17:48.571 { 00:17:48.571 "method": "iobuf_set_options", 00:17:48.571 "params": { 00:17:48.571 "small_pool_count": 8192, 00:17:48.571 "large_pool_count": 1024, 00:17:48.571 "small_bufsize": 8192, 00:17:48.571 "large_bufsize": 135168, 00:17:48.571 "enable_numa": false 00:17:48.571 } 00:17:48.571 } 00:17:48.571 ] 00:17:48.571 }, 00:17:48.571 { 00:17:48.571 "subsystem": "sock", 00:17:48.571 "config": [ 00:17:48.571 { 00:17:48.571 "method": "sock_set_default_impl", 00:17:48.571 "params": { 00:17:48.571 "impl_name": "uring" 00:17:48.571 } 00:17:48.571 }, 00:17:48.571 { 00:17:48.571 "method": "sock_impl_set_options", 00:17:48.571 "params": { 00:17:48.571 "impl_name": "ssl", 00:17:48.571 "recv_buf_size": 4096, 00:17:48.571 "send_buf_size": 4096, 00:17:48.571 "enable_recv_pipe": true, 00:17:48.571 "enable_quickack": false, 00:17:48.571 "enable_placement_id": 0, 00:17:48.571 "enable_zerocopy_send_server": true, 00:17:48.571 "enable_zerocopy_send_client": false, 00:17:48.571 "zerocopy_threshold": 0, 00:17:48.571 "tls_version": 0, 00:17:48.571 "enable_ktls": false 00:17:48.571 } 00:17:48.571 }, 00:17:48.571 { 00:17:48.571 "method": "sock_impl_set_options", 00:17:48.571 "params": { 00:17:48.571 "impl_name": "posix", 00:17:48.571 "recv_buf_size": 2097152, 00:17:48.571 "send_buf_size": 2097152, 00:17:48.571 "enable_recv_pipe": true, 00:17:48.571 "enable_quickack": false, 00:17:48.571 "enable_placement_id": 0, 00:17:48.571 "enable_zerocopy_send_server": true, 00:17:48.571 "enable_zerocopy_send_client": false, 00:17:48.571 "zerocopy_threshold": 0, 00:17:48.571 "tls_version": 0, 00:17:48.571 "enable_ktls": false 00:17:48.571 } 00:17:48.571 }, 00:17:48.571 { 00:17:48.571 "method": "sock_impl_set_options", 00:17:48.571 "params": { 00:17:48.571 "impl_name": "uring", 00:17:48.571 "recv_buf_size": 2097152, 00:17:48.571 "send_buf_size": 2097152, 00:17:48.571 "enable_recv_pipe": true, 00:17:48.571 "enable_quickack": false, 00:17:48.571 "enable_placement_id": 0, 00:17:48.571 "enable_zerocopy_send_server": false, 00:17:48.571 "enable_zerocopy_send_client": false, 00:17:48.571 "zerocopy_threshold": 0, 00:17:48.572 "tls_version": 0, 00:17:48.572 "enable_ktls": false 00:17:48.572 } 00:17:48.572 } 00:17:48.572 ] 00:17:48.572 }, 00:17:48.572 { 00:17:48.572 "subsystem": "vmd", 00:17:48.572 "config": [] 00:17:48.572 }, 00:17:48.572 { 00:17:48.572 "subsystem": "accel", 00:17:48.572 "config": [ 00:17:48.572 { 00:17:48.572 "method": "accel_set_options", 00:17:48.572 "params": { 00:17:48.572 "small_cache_size": 128, 00:17:48.572 "large_cache_size": 16, 00:17:48.572 "task_count": 2048, 00:17:48.572 "sequence_count": 2048, 00:17:48.572 "buf_count": 2048 00:17:48.572 } 00:17:48.572 } 00:17:48.572 ] 00:17:48.572 }, 
00:17:48.572 { 00:17:48.572 "subsystem": "bdev", 00:17:48.572 "config": [ 00:17:48.572 { 00:17:48.572 "method": "bdev_set_options", 00:17:48.572 "params": { 00:17:48.572 "bdev_io_pool_size": 65535, 00:17:48.572 "bdev_io_cache_size": 256, 00:17:48.572 "bdev_auto_examine": true, 00:17:48.572 "iobuf_small_cache_size": 128, 00:17:48.572 "iobuf_large_cache_size": 16 00:17:48.572 } 00:17:48.572 }, 00:17:48.572 { 00:17:48.572 "method": "bdev_raid_set_options", 00:17:48.572 "params": { 00:17:48.572 "process_window_size_kb": 1024, 00:17:48.572 "process_max_bandwidth_mb_sec": 0 00:17:48.572 } 00:17:48.572 }, 00:17:48.572 { 00:17:48.572 "method": "bdev_iscsi_set_options", 00:17:48.572 "params": { 00:17:48.572 "timeout_sec": 30 00:17:48.572 } 00:17:48.572 }, 00:17:48.572 { 00:17:48.572 "method": "bdev_nvme_set_options", 00:17:48.572 "params": { 00:17:48.572 "action_on_timeout": "none", 00:17:48.572 "timeout_us": 0, 00:17:48.572 "timeout_admin_us": 0, 00:17:48.572 "keep_alive_timeout_ms": 10000, 00:17:48.572 "arbitration_burst": 0, 00:17:48.572 "low_priority_weight": 0, 00:17:48.572 "medium_priority_weight": 0, 00:17:48.572 "high_priority_weight": 0, 00:17:48.572 "nvme_adminq_poll_period_us": 10000, 00:17:48.572 "nvme_ioq_poll_period_us": 0, 00:17:48.572 "io_queue_requests": 512, 00:17:48.572 "delay_cmd_submit": true, 00:17:48.572 "transport_retry_count": 4, 00:17:48.572 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:48.572 "bdev_retry_count": 3, 00:17:48.572 "transport_ack_timeout": 0, 00:17:48.572 "ctrlr_loss_timeout_sec": 0, 00:17:48.572 "reconnect_delay_sec": 0, 00:17:48.572 "fast_io_fail_timeout_sec": 0, 00:17:48.572 "disable_auto_failback": false, 00:17:48.572 "generate_uuids": false, 00:17:48.572 "transport_tos": 0, 00:17:48.572 "nvme_error_stat": false, 00:17:48.572 "rdma_srq_size": 0, 00:17:48.572 "io_path_stat": false, 00:17:48.572 "allow_accel_sequence": false, 00:17:48.572 "rdma_max_cq_size": 0, 00:17:48.572 "rdma_cm_event_timeout_ms": 0, 00:17:48.572 "dhchap_digests": [ 00:17:48.572 "sha256", 00:17:48.572 "sha384", 00:17:48.572 "sha512" 00:17:48.572 ], 00:17:48.572 "dhchap_dhgroups": [ 00:17:48.572 "null", 00:17:48.572 "ffdhe2048", 00:17:48.572 "ffdhe3072", 00:17:48.572 "ffdhe4096", 00:17:48.572 "ffdhe6144", 00:17:48.572 "ffdhe8192" 00:17:48.572 ], 00:17:48.572 "rdma_umr_per_io": false 00:17:48.572 } 00:17:48.572 }, 00:17:48.572 { 00:17:48.572 "method": "bdev_nvme_attach_controller", 00:17:48.572 "params": { 00:17:48.572 "name": "TLSTEST", 00:17:48.572 "trtype": "TCP", 00:17:48.572 "adrfam": "IPv4", 00:17:48.572 "traddr": "10.0.0.3", 00:17:48.572 "trsvcid": "4420", 00:17:48.572 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.572 "prchk_reftag": false, 00:17:48.572 "prchk_guard": false, 00:17:48.572 "ctrlr_loss_timeout_sec": 0, 00:17:48.572 "reconnect_delay_sec": 0, 00:17:48.572 "fast_io_fail_timeout_sec": 0, 00:17:48.572 "psk": "key0", 00:17:48.572 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:48.572 "hdgst": false, 00:17:48.572 "ddgst": false, 00:17:48.572 "multipath": "multipath" 00:17:48.572 } 00:17:48.572 }, 00:17:48.572 { 00:17:48.572 "method": "bdev_nvme_set_hotplug", 00:17:48.572 "params": { 00:17:48.572 "period_us": 100000, 00:17:48.572 "enable": false 00:17:48.572 } 00:17:48.572 }, 00:17:48.572 { 00:17:48.572 "method": "bdev_wait_for_examine" 00:17:48.572 } 00:17:48.572 ] 00:17:48.572 }, 00:17:48.572 { 00:17:48.572 "subsystem": "nbd", 00:17:48.572 "config": [] 00:17:48.572 } 00:17:48.572 ] 00:17:48.572 }' 00:17:48.831 [2024-12-13 
09:19:42.537853] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:48.832 [2024-12-13 09:19:42.538322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76785 ] 00:17:48.832 [2024-12-13 09:19:42.718531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.090 [2024-12-13 09:19:42.807669] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.350 [2024-12-13 09:19:43.044972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:49.350 [2024-12-13 09:19:43.144673] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:49.609 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.609 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:49.609 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:49.868 Running I/O for 10 seconds... 00:17:51.741 3200.00 IOPS, 12.50 MiB/s [2024-12-13T09:19:47.008Z] 3258.00 IOPS, 12.73 MiB/s [2024-12-13T09:19:47.946Z] 3200.00 IOPS, 12.50 MiB/s [2024-12-13T09:19:48.882Z] 3186.75 IOPS, 12.45 MiB/s [2024-12-13T09:19:49.819Z] 3174.40 IOPS, 12.40 MiB/s [2024-12-13T09:19:50.756Z] 3173.00 IOPS, 12.39 MiB/s [2024-12-13T09:19:51.694Z] 3163.43 IOPS, 12.36 MiB/s [2024-12-13T09:19:52.667Z] 3166.00 IOPS, 12.37 MiB/s [2024-12-13T09:19:53.604Z] 3157.78 IOPS, 12.34 MiB/s [2024-12-13T09:19:53.864Z] 3148.80 IOPS, 12.30 MiB/s 00:17:59.974 Latency(us) 00:17:59.974 [2024-12-13T09:19:53.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.975 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:59.975 Verification LBA range: start 0x0 length 0x2000 00:17:59.975 TLSTESTn1 : 10.03 3152.08 12.31 0.00 0.00 40520.75 7328.12 27525.12 00:17:59.975 [2024-12-13T09:19:53.865Z] =================================================================================================================== 00:17:59.975 [2024-12-13T09:19:53.865Z] Total : 3152.08 12.31 0.00 0.00 40520.75 7328.12 27525.12 00:17:59.975 { 00:17:59.975 "results": [ 00:17:59.975 { 00:17:59.975 "job": "TLSTESTn1", 00:17:59.975 "core_mask": "0x4", 00:17:59.975 "workload": "verify", 00:17:59.975 "status": "finished", 00:17:59.975 "verify_range": { 00:17:59.975 "start": 0, 00:17:59.975 "length": 8192 00:17:59.975 }, 00:17:59.975 "queue_depth": 128, 00:17:59.975 "io_size": 4096, 00:17:59.975 "runtime": 10.030207, 00:17:59.975 "iops": 3152.078516425434, 00:17:59.975 "mibps": 12.312806704786851, 00:17:59.975 "io_failed": 0, 00:17:59.975 "io_timeout": 0, 00:17:59.975 "avg_latency_us": 40520.751299227086, 00:17:59.975 "min_latency_us": 7328.1163636363635, 00:17:59.975 "max_latency_us": 27525.12 00:17:59.975 } 00:17:59.975 ], 00:17:59.975 "core_count": 1 00:17:59.975 } 00:17:59.975 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:59.975 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 76785 00:17:59.975 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- 
# '[' -z 76785 ']' 00:17:59.975 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76785 00:17:59.975 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:59.975 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.975 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76785 00:17:59.975 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:59.975 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:59.975 killing process with pid 76785 00:17:59.975 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76785' 00:17:59.975 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76785 00:17:59.975 Received shutdown signal, test time was about 10.000000 seconds 00:17:59.975 00:17:59.975 Latency(us) 00:17:59.975 [2024-12-13T09:19:53.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.975 [2024-12-13T09:19:53.865Z] =================================================================================================================== 00:17:59.975 [2024-12-13T09:19:53.865Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:59.975 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76785 00:18:00.913 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 76753 00:18:00.913 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76753 ']' 00:18:00.913 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76753 00:18:00.913 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:00.913 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.913 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76753 00:18:00.913 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:00.913 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:00.913 killing process with pid 76753 00:18:00.913 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76753' 00:18:00.913 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76753 00:18:00.913 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76753 00:18:01.849 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:01.849 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:01.849 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:01.849 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.849 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=76931 00:18:01.849 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:01.849 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 76931 00:18:01.849 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76931 ']' 00:18:01.849 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.849 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.849 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.849 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.849 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.849 [2024-12-13 09:19:55.668663] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:01.849 [2024-12-13 09:19:55.668847] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.108 [2024-12-13 09:19:55.847426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.108 [2024-12-13 09:19:55.972801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.108 [2024-12-13 09:19:55.972885] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.108 [2024-12-13 09:19:55.972913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.108 [2024-12-13 09:19:55.972943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.108 [2024-12-13 09:19:55.972960] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
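The setup_nvmf_tgt step that runs next configures this freshly started target for TLS. Collapsed into plain rpc.py calls, with the paths, NQNs and PSK file name copied from the trace below (a reading aid, not an extra step in the test):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # talks to the target on the default /var/tmp/spdk.sock
"$RPC" nvmf_create_transport -t tcp -o
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS-enabled listener
"$RPC" bdev_malloc_create 32 4096 -b malloc0       # 32 MB malloc bdev, 4096-byte blocks, backs the namespace
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
"$RPC" keyring_file_add_key key0 /tmp/tmp.EcOWTTjkkG    # register the PSK file as key0
"$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0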
00:18:02.108 [2024-12-13 09:19:55.974424] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.368 [2024-12-13 09:19:56.140824] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:02.937 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.937 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:02.938 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:02.938 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:02.938 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.938 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.938 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.EcOWTTjkkG 00:18:02.938 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.EcOWTTjkkG 00:18:02.938 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:03.197 [2024-12-13 09:19:56.890015] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.197 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:03.456 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:03.716 [2024-12-13 09:19:57.358161] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:03.716 [2024-12-13 09:19:57.358536] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:03.716 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:03.975 malloc0 00:18:03.975 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:04.234 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.EcOWTTjkkG 00:18:04.494 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:04.754 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=76992 00:18:04.754 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:04.754 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:04.754 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 76992 /var/tmp/bdevperf.sock 00:18:04.754 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76992 ']' 00:18:04.754 
09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:04.754 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:04.754 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:04.754 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.754 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.754 [2024-12-13 09:19:58.478654] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:04.754 [2024-12-13 09:19:58.478826] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76992 ] 00:18:04.754 [2024-12-13 09:19:58.639139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.013 [2024-12-13 09:19:58.739172] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.272 [2024-12-13 09:19:58.904006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:05.532 09:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.532 09:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:05.532 09:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EcOWTTjkkG 00:18:05.791 09:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:06.050 [2024-12-13 09:19:59.900599] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:06.309 nvme0n1 00:18:06.309 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:06.309 Running I/O for 1 seconds... 
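On the initiator side, the flow just traced (target/tls.sh:229-234) is the mirror image: the same PSK file is registered with the bdevperf instance, a TLS-protected NVMe/TCP controller is attached, and perform_tests then drives I/O against the resulting nvme0n1 bdev. As a compact sketch with the values from the trace:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bdevperf.sock

"$RPC" -s "$BPERF_SOCK" keyring_file_add_key key0 /tmp/tmp.EcOWTTjkkG
"$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

# bdevperf exercises the bdev (nvme0n1) created by the attach above:
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests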
00:18:07.506 3139.00 IOPS, 12.26 MiB/s 00:18:07.506 Latency(us) 00:18:07.506 [2024-12-13T09:20:01.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.506 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:07.506 Verification LBA range: start 0x0 length 0x2000 00:18:07.506 nvme0n1 : 1.03 3162.24 12.35 0.00 0.00 39746.38 7596.22 26095.24 00:18:07.506 [2024-12-13T09:20:01.396Z] =================================================================================================================== 00:18:07.506 [2024-12-13T09:20:01.396Z] Total : 3162.24 12.35 0.00 0.00 39746.38 7596.22 26095.24 00:18:07.506 { 00:18:07.506 "results": [ 00:18:07.506 { 00:18:07.506 "job": "nvme0n1", 00:18:07.506 "core_mask": "0x2", 00:18:07.506 "workload": "verify", 00:18:07.506 "status": "finished", 00:18:07.506 "verify_range": { 00:18:07.506 "start": 0, 00:18:07.506 "length": 8192 00:18:07.506 }, 00:18:07.506 "queue_depth": 128, 00:18:07.506 "io_size": 4096, 00:18:07.506 "runtime": 1.033445, 00:18:07.506 "iops": 3162.238919342587, 00:18:07.506 "mibps": 12.35249577868198, 00:18:07.506 "io_failed": 0, 00:18:07.506 "io_timeout": 0, 00:18:07.506 "avg_latency_us": 39746.38168020474, 00:18:07.506 "min_latency_us": 7596.218181818182, 00:18:07.506 "max_latency_us": 26095.243636363637 00:18:07.506 } 00:18:07.506 ], 00:18:07.506 "core_count": 1 00:18:07.506 } 00:18:07.506 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 76992 00:18:07.506 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76992 ']' 00:18:07.506 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76992 00:18:07.506 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:07.506 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:07.506 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76992 00:18:07.506 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:07.506 killing process with pid 76992 00:18:07.506 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:07.506 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76992' 00:18:07.506 Received shutdown signal, test time was about 1.000000 seconds 00:18:07.506 00:18:07.506 Latency(us) 00:18:07.506 [2024-12-13T09:20:01.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.506 [2024-12-13T09:20:01.396Z] =================================================================================================================== 00:18:07.506 [2024-12-13T09:20:01.396Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:07.506 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76992 00:18:07.506 09:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76992 00:18:08.443 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 76931 00:18:08.443 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76931 ']' 00:18:08.443 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76931 00:18:08.443 09:20:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:08.443 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.443 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76931 00:18:08.443 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:08.443 killing process with pid 76931 00:18:08.443 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:08.443 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76931' 00:18:08.443 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76931 00:18:08.443 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76931 00:18:09.381 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:09.381 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:09.381 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:09.381 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.381 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=77061 00:18:09.381 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:09.381 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 77061 00:18:09.381 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 77061 ']' 00:18:09.381 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.381 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.381 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.381 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.381 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.381 [2024-12-13 09:20:03.082392] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:09.381 [2024-12-13 09:20:03.082564] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.381 [2024-12-13 09:20:03.264075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.641 [2024-12-13 09:20:03.347036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.641 [2024-12-13 09:20:03.347141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:09.641 [2024-12-13 09:20:03.347158] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:09.641 [2024-12-13 09:20:03.347179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:09.641 [2024-12-13 09:20:03.347208] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:09.641 [2024-12-13 09:20:03.348213] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.641 [2024-12-13 09:20:03.502338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:10.210 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.210 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:10.210 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:10.210 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:10.210 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.210 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.210 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:10.210 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.210 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.210 [2024-12-13 09:20:04.065689] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:10.470 malloc0 00:18:10.470 [2024-12-13 09:20:04.113790] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:10.470 [2024-12-13 09:20:04.114204] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:10.470 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.470 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=77093 00:18:10.470 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:10.470 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 77093 /var/tmp/bdevperf.sock 00:18:10.470 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 77093 ']' 00:18:10.470 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:10.470 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:10.470 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
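Both nvmf_tgt instances in this log are launched with '-i 0 -e 0xFFFF', so every tracepoint group is enabled and the trace lives in a shared-memory file named after instance id 0. The app_setup_trace notices above spell out the two ways to get at it; quoted here as a sketch, to be run while the target is still up:

spdk_trace -s nvmf -i 0                       # snapshot of events at runtime, per the notice above
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0    # or keep the raw shm file for offline analysis/debug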
00:18:10.470 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.470 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.470 [2024-12-13 09:20:04.253401] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:10.470 [2024-12-13 09:20:04.253564] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77093 ] 00:18:10.729 [2024-12-13 09:20:04.428020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.729 [2024-12-13 09:20:04.513921] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.989 [2024-12-13 09:20:04.669722] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:11.557 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:11.557 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:11.557 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EcOWTTjkkG 00:18:11.557 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:11.816 [2024-12-13 09:20:05.645689] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:12.075 nvme0n1 00:18:12.075 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:12.075 Running I/O for 1 seconds... 
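The per-run results blocks printed by bdevperf (one follows immediately, two appeared earlier) can be cross-checked with back-of-the-envelope arithmetic: mibps is just iops scaled by the 4096-byte IO size, and with -q 128 outstanding IOs Little's law approximates the average latency. A sanity-check sketch using numbers copied from the run below (the gap against its reported 41568.95 us average is expected, since the measured interval includes ramp-up):

awk 'BEGIN {
    iops = 3027.543642; io_size = 4096; qd = 128      # values from the results block below
    printf "MiB/s        = %.2f\n", iops * io_size / (1024 * 1024)   # ~11.83, matches "mibps"
    printf "latency (us) = %.0f\n", qd / iops * 1e6                  # ~42279, vs reported 41569
}'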
00:18:13.013 2984.00 IOPS, 11.66 MiB/s 00:18:13.013 Latency(us) 00:18:13.013 [2024-12-13T09:20:06.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.013 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:13.013 Verification LBA range: start 0x0 length 0x2000 00:18:13.013 nvme0n1 : 1.03 3027.54 11.83 0.00 0.00 41568.95 5659.93 28001.75 00:18:13.013 [2024-12-13T09:20:06.903Z] =================================================================================================================== 00:18:13.013 [2024-12-13T09:20:06.903Z] Total : 3027.54 11.83 0.00 0.00 41568.95 5659.93 28001.75 00:18:13.273 { 00:18:13.273 "results": [ 00:18:13.273 { 00:18:13.273 "job": "nvme0n1", 00:18:13.273 "core_mask": "0x2", 00:18:13.273 "workload": "verify", 00:18:13.273 "status": "finished", 00:18:13.273 "verify_range": { 00:18:13.273 "start": 0, 00:18:13.273 "length": 8192 00:18:13.273 }, 00:18:13.273 "queue_depth": 128, 00:18:13.273 "io_size": 4096, 00:18:13.273 "runtime": 1.027896, 00:18:13.273 "iops": 3027.543642547495, 00:18:13.273 "mibps": 11.826342353701152, 00:18:13.273 "io_failed": 0, 00:18:13.273 "io_timeout": 0, 00:18:13.273 "avg_latency_us": 41568.95334423931, 00:18:13.273 "min_latency_us": 5659.927272727273, 00:18:13.273 "max_latency_us": 28001.745454545453 00:18:13.273 } 00:18:13.273 ], 00:18:13.273 "core_count": 1 00:18:13.273 } 00:18:13.273 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:13.273 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.273 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.273 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.273 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:13.273 "subsystems": [ 00:18:13.273 { 00:18:13.273 "subsystem": "keyring", 00:18:13.273 "config": [ 00:18:13.273 { 00:18:13.273 "method": "keyring_file_add_key", 00:18:13.273 "params": { 00:18:13.273 "name": "key0", 00:18:13.273 "path": "/tmp/tmp.EcOWTTjkkG" 00:18:13.273 } 00:18:13.273 } 00:18:13.273 ] 00:18:13.273 }, 00:18:13.273 { 00:18:13.273 "subsystem": "iobuf", 00:18:13.273 "config": [ 00:18:13.273 { 00:18:13.273 "method": "iobuf_set_options", 00:18:13.273 "params": { 00:18:13.273 "small_pool_count": 8192, 00:18:13.273 "large_pool_count": 1024, 00:18:13.273 "small_bufsize": 8192, 00:18:13.273 "large_bufsize": 135168, 00:18:13.273 "enable_numa": false 00:18:13.273 } 00:18:13.273 } 00:18:13.273 ] 00:18:13.273 }, 00:18:13.273 { 00:18:13.273 "subsystem": "sock", 00:18:13.273 "config": [ 00:18:13.273 { 00:18:13.273 "method": "sock_set_default_impl", 00:18:13.273 "params": { 00:18:13.273 "impl_name": "uring" 00:18:13.273 } 00:18:13.273 }, 00:18:13.273 { 00:18:13.273 "method": "sock_impl_set_options", 00:18:13.273 "params": { 00:18:13.273 "impl_name": "ssl", 00:18:13.273 "recv_buf_size": 4096, 00:18:13.273 "send_buf_size": 4096, 00:18:13.273 "enable_recv_pipe": true, 00:18:13.273 "enable_quickack": false, 00:18:13.273 "enable_placement_id": 0, 00:18:13.273 "enable_zerocopy_send_server": true, 00:18:13.273 "enable_zerocopy_send_client": false, 00:18:13.273 "zerocopy_threshold": 0, 00:18:13.273 "tls_version": 0, 00:18:13.273 "enable_ktls": false 00:18:13.273 } 00:18:13.273 }, 00:18:13.273 { 00:18:13.273 "method": "sock_impl_set_options", 00:18:13.273 "params": { 00:18:13.273 "impl_name": "posix", 
00:18:13.273 "recv_buf_size": 2097152, 00:18:13.273 "send_buf_size": 2097152, 00:18:13.273 "enable_recv_pipe": true, 00:18:13.273 "enable_quickack": false, 00:18:13.273 "enable_placement_id": 0, 00:18:13.273 "enable_zerocopy_send_server": true, 00:18:13.273 "enable_zerocopy_send_client": false, 00:18:13.273 "zerocopy_threshold": 0, 00:18:13.273 "tls_version": 0, 00:18:13.273 "enable_ktls": false 00:18:13.273 } 00:18:13.273 }, 00:18:13.273 { 00:18:13.273 "method": "sock_impl_set_options", 00:18:13.273 "params": { 00:18:13.273 "impl_name": "uring", 00:18:13.273 "recv_buf_size": 2097152, 00:18:13.273 "send_buf_size": 2097152, 00:18:13.273 "enable_recv_pipe": true, 00:18:13.273 "enable_quickack": false, 00:18:13.273 "enable_placement_id": 0, 00:18:13.273 "enable_zerocopy_send_server": false, 00:18:13.273 "enable_zerocopy_send_client": false, 00:18:13.273 "zerocopy_threshold": 0, 00:18:13.273 "tls_version": 0, 00:18:13.273 "enable_ktls": false 00:18:13.273 } 00:18:13.273 } 00:18:13.273 ] 00:18:13.273 }, 00:18:13.273 { 00:18:13.273 "subsystem": "vmd", 00:18:13.273 "config": [] 00:18:13.273 }, 00:18:13.273 { 00:18:13.273 "subsystem": "accel", 00:18:13.273 "config": [ 00:18:13.273 { 00:18:13.273 "method": "accel_set_options", 00:18:13.273 "params": { 00:18:13.273 "small_cache_size": 128, 00:18:13.273 "large_cache_size": 16, 00:18:13.273 "task_count": 2048, 00:18:13.273 "sequence_count": 2048, 00:18:13.273 "buf_count": 2048 00:18:13.273 } 00:18:13.273 } 00:18:13.273 ] 00:18:13.273 }, 00:18:13.273 { 00:18:13.273 "subsystem": "bdev", 00:18:13.273 "config": [ 00:18:13.273 { 00:18:13.273 "method": "bdev_set_options", 00:18:13.273 "params": { 00:18:13.273 "bdev_io_pool_size": 65535, 00:18:13.273 "bdev_io_cache_size": 256, 00:18:13.273 "bdev_auto_examine": true, 00:18:13.273 "iobuf_small_cache_size": 128, 00:18:13.274 "iobuf_large_cache_size": 16 00:18:13.274 } 00:18:13.274 }, 00:18:13.274 { 00:18:13.274 "method": "bdev_raid_set_options", 00:18:13.274 "params": { 00:18:13.274 "process_window_size_kb": 1024, 00:18:13.274 "process_max_bandwidth_mb_sec": 0 00:18:13.274 } 00:18:13.274 }, 00:18:13.274 { 00:18:13.274 "method": "bdev_iscsi_set_options", 00:18:13.274 "params": { 00:18:13.274 "timeout_sec": 30 00:18:13.274 } 00:18:13.274 }, 00:18:13.274 { 00:18:13.274 "method": "bdev_nvme_set_options", 00:18:13.274 "params": { 00:18:13.274 "action_on_timeout": "none", 00:18:13.274 "timeout_us": 0, 00:18:13.274 "timeout_admin_us": 0, 00:18:13.274 "keep_alive_timeout_ms": 10000, 00:18:13.274 "arbitration_burst": 0, 00:18:13.274 "low_priority_weight": 0, 00:18:13.274 "medium_priority_weight": 0, 00:18:13.274 "high_priority_weight": 0, 00:18:13.274 "nvme_adminq_poll_period_us": 10000, 00:18:13.274 "nvme_ioq_poll_period_us": 0, 00:18:13.274 "io_queue_requests": 0, 00:18:13.274 "delay_cmd_submit": true, 00:18:13.274 "transport_retry_count": 4, 00:18:13.274 "bdev_retry_count": 3, 00:18:13.274 "transport_ack_timeout": 0, 00:18:13.274 "ctrlr_loss_timeout_sec": 0, 00:18:13.274 "reconnect_delay_sec": 0, 00:18:13.274 "fast_io_fail_timeout_sec": 0, 00:18:13.274 "disable_auto_failback": false, 00:18:13.274 "generate_uuids": false, 00:18:13.274 "transport_tos": 0, 00:18:13.274 "nvme_error_stat": false, 00:18:13.274 "rdma_srq_size": 0, 00:18:13.274 "io_path_stat": false, 00:18:13.274 "allow_accel_sequence": false, 00:18:13.274 "rdma_max_cq_size": 0, 00:18:13.274 "rdma_cm_event_timeout_ms": 0, 00:18:13.274 "dhchap_digests": [ 00:18:13.274 "sha256", 00:18:13.274 "sha384", 00:18:13.274 "sha512" 00:18:13.274 ], 00:18:13.274 
"dhchap_dhgroups": [ 00:18:13.274 "null", 00:18:13.274 "ffdhe2048", 00:18:13.274 "ffdhe3072", 00:18:13.274 "ffdhe4096", 00:18:13.274 "ffdhe6144", 00:18:13.274 "ffdhe8192" 00:18:13.274 ], 00:18:13.274 "rdma_umr_per_io": false 00:18:13.274 } 00:18:13.274 }, 00:18:13.274 { 00:18:13.274 "method": "bdev_nvme_set_hotplug", 00:18:13.274 "params": { 00:18:13.274 "period_us": 100000, 00:18:13.274 "enable": false 00:18:13.274 } 00:18:13.274 }, 00:18:13.274 { 00:18:13.274 "method": "bdev_malloc_create", 00:18:13.274 "params": { 00:18:13.274 "name": "malloc0", 00:18:13.274 "num_blocks": 8192, 00:18:13.274 "block_size": 4096, 00:18:13.274 "physical_block_size": 4096, 00:18:13.274 "uuid": "46465411-c2e7-492e-a4fb-bee2d6e53a4c", 00:18:13.274 "optimal_io_boundary": 0, 00:18:13.274 "md_size": 0, 00:18:13.274 "dif_type": 0, 00:18:13.274 "dif_is_head_of_md": false, 00:18:13.274 "dif_pi_format": 0 00:18:13.274 } 00:18:13.274 }, 00:18:13.274 { 00:18:13.274 "method": "bdev_wait_for_examine" 00:18:13.274 } 00:18:13.274 ] 00:18:13.274 }, 00:18:13.274 { 00:18:13.274 "subsystem": "nbd", 00:18:13.274 "config": [] 00:18:13.274 }, 00:18:13.274 { 00:18:13.274 "subsystem": "scheduler", 00:18:13.274 "config": [ 00:18:13.274 { 00:18:13.274 "method": "framework_set_scheduler", 00:18:13.274 "params": { 00:18:13.274 "name": "static" 00:18:13.274 } 00:18:13.274 } 00:18:13.274 ] 00:18:13.274 }, 00:18:13.274 { 00:18:13.274 "subsystem": "nvmf", 00:18:13.274 "config": [ 00:18:13.274 { 00:18:13.274 "method": "nvmf_set_config", 00:18:13.274 "params": { 00:18:13.274 "discovery_filter": "match_any", 00:18:13.274 "admin_cmd_passthru": { 00:18:13.274 "identify_ctrlr": false 00:18:13.274 }, 00:18:13.274 "dhchap_digests": [ 00:18:13.274 "sha256", 00:18:13.274 "sha384", 00:18:13.274 "sha512" 00:18:13.274 ], 00:18:13.274 "dhchap_dhgroups": [ 00:18:13.274 "null", 00:18:13.274 "ffdhe2048", 00:18:13.274 "ffdhe3072", 00:18:13.274 "ffdhe4096", 00:18:13.274 "ffdhe6144", 00:18:13.274 "ffdhe8192" 00:18:13.274 ] 00:18:13.274 } 00:18:13.274 }, 00:18:13.274 { 00:18:13.274 "method": "nvmf_set_max_subsystems", 00:18:13.274 "params": { 00:18:13.274 "max_subsystems": 1024 00:18:13.274 } 00:18:13.274 }, 00:18:13.274 { 00:18:13.274 "method": "nvmf_set_crdt", 00:18:13.274 "params": { 00:18:13.274 "crdt1": 0, 00:18:13.274 "crdt2": 0, 00:18:13.274 "crdt3": 0 00:18:13.274 } 00:18:13.274 }, 00:18:13.274 { 00:18:13.274 "method": "nvmf_create_transport", 00:18:13.274 "params": { 00:18:13.274 "trtype": "TCP", 00:18:13.274 "max_queue_depth": 128, 00:18:13.274 "max_io_qpairs_per_ctrlr": 127, 00:18:13.274 "in_capsule_data_size": 4096, 00:18:13.274 "max_io_size": 131072, 00:18:13.274 "io_unit_size": 131072, 00:18:13.274 "max_aq_depth": 128, 00:18:13.274 "num_shared_buffers": 511, 00:18:13.274 "buf_cache_size": 4294967295, 00:18:13.274 "dif_insert_or_strip": false, 00:18:13.274 "zcopy": false, 00:18:13.274 "c2h_success": false, 00:18:13.274 "sock_priority": 0, 00:18:13.274 "abort_timeout_sec": 1, 00:18:13.274 "ack_timeout": 0, 00:18:13.274 "data_wr_pool_size": 0 00:18:13.274 } 00:18:13.274 }, 00:18:13.274 { 00:18:13.274 "method": "nvmf_create_subsystem", 00:18:13.274 "params": { 00:18:13.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.274 "allow_any_host": false, 00:18:13.274 "serial_number": "00000000000000000000", 00:18:13.274 "model_number": "SPDK bdev Controller", 00:18:13.274 "max_namespaces": 32, 00:18:13.274 "min_cntlid": 1, 00:18:13.274 "max_cntlid": 65519, 00:18:13.274 "ana_reporting": false 00:18:13.274 } 00:18:13.274 }, 00:18:13.274 { 00:18:13.274 
"method": "nvmf_subsystem_add_host", 00:18:13.274 "params": { 00:18:13.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.274 "host": "nqn.2016-06.io.spdk:host1", 00:18:13.274 "psk": "key0" 00:18:13.274 } 00:18:13.274 }, 00:18:13.274 { 00:18:13.274 "method": "nvmf_subsystem_add_ns", 00:18:13.274 "params": { 00:18:13.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.274 "namespace": { 00:18:13.274 "nsid": 1, 00:18:13.274 "bdev_name": "malloc0", 00:18:13.274 "nguid": "46465411C2E7492EA4FBBEE2D6E53A4C", 00:18:13.274 "uuid": "46465411-c2e7-492e-a4fb-bee2d6e53a4c", 00:18:13.274 "no_auto_visible": false 00:18:13.274 } 00:18:13.274 } 00:18:13.274 }, 00:18:13.274 { 00:18:13.274 "method": "nvmf_subsystem_add_listener", 00:18:13.274 "params": { 00:18:13.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.274 "listen_address": { 00:18:13.274 "trtype": "TCP", 00:18:13.274 "adrfam": "IPv4", 00:18:13.274 "traddr": "10.0.0.3", 00:18:13.274 "trsvcid": "4420" 00:18:13.274 }, 00:18:13.274 "secure_channel": false, 00:18:13.274 "sock_impl": "ssl" 00:18:13.274 } 00:18:13.274 } 00:18:13.274 ] 00:18:13.274 } 00:18:13.274 ] 00:18:13.274 }' 00:18:13.274 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:13.534 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:13.534 "subsystems": [ 00:18:13.534 { 00:18:13.534 "subsystem": "keyring", 00:18:13.534 "config": [ 00:18:13.534 { 00:18:13.534 "method": "keyring_file_add_key", 00:18:13.534 "params": { 00:18:13.534 "name": "key0", 00:18:13.534 "path": "/tmp/tmp.EcOWTTjkkG" 00:18:13.534 } 00:18:13.534 } 00:18:13.534 ] 00:18:13.534 }, 00:18:13.534 { 00:18:13.534 "subsystem": "iobuf", 00:18:13.534 "config": [ 00:18:13.534 { 00:18:13.534 "method": "iobuf_set_options", 00:18:13.534 "params": { 00:18:13.534 "small_pool_count": 8192, 00:18:13.534 "large_pool_count": 1024, 00:18:13.534 "small_bufsize": 8192, 00:18:13.534 "large_bufsize": 135168, 00:18:13.534 "enable_numa": false 00:18:13.534 } 00:18:13.534 } 00:18:13.534 ] 00:18:13.534 }, 00:18:13.534 { 00:18:13.534 "subsystem": "sock", 00:18:13.534 "config": [ 00:18:13.534 { 00:18:13.534 "method": "sock_set_default_impl", 00:18:13.534 "params": { 00:18:13.534 "impl_name": "uring" 00:18:13.534 } 00:18:13.534 }, 00:18:13.534 { 00:18:13.534 "method": "sock_impl_set_options", 00:18:13.534 "params": { 00:18:13.534 "impl_name": "ssl", 00:18:13.534 "recv_buf_size": 4096, 00:18:13.534 "send_buf_size": 4096, 00:18:13.534 "enable_recv_pipe": true, 00:18:13.534 "enable_quickack": false, 00:18:13.534 "enable_placement_id": 0, 00:18:13.534 "enable_zerocopy_send_server": true, 00:18:13.534 "enable_zerocopy_send_client": false, 00:18:13.534 "zerocopy_threshold": 0, 00:18:13.534 "tls_version": 0, 00:18:13.534 "enable_ktls": false 00:18:13.534 } 00:18:13.534 }, 00:18:13.534 { 00:18:13.534 "method": "sock_impl_set_options", 00:18:13.534 "params": { 00:18:13.534 "impl_name": "posix", 00:18:13.534 "recv_buf_size": 2097152, 00:18:13.534 "send_buf_size": 2097152, 00:18:13.534 "enable_recv_pipe": true, 00:18:13.534 "enable_quickack": false, 00:18:13.534 "enable_placement_id": 0, 00:18:13.534 "enable_zerocopy_send_server": true, 00:18:13.534 "enable_zerocopy_send_client": false, 00:18:13.534 "zerocopy_threshold": 0, 00:18:13.534 "tls_version": 0, 00:18:13.534 "enable_ktls": false 00:18:13.534 } 00:18:13.534 }, 00:18:13.534 { 00:18:13.534 "method": "sock_impl_set_options", 00:18:13.534 "params": { 00:18:13.534 
"impl_name": "uring", 00:18:13.534 "recv_buf_size": 2097152, 00:18:13.534 "send_buf_size": 2097152, 00:18:13.534 "enable_recv_pipe": true, 00:18:13.534 "enable_quickack": false, 00:18:13.534 "enable_placement_id": 0, 00:18:13.534 "enable_zerocopy_send_server": false, 00:18:13.534 "enable_zerocopy_send_client": false, 00:18:13.534 "zerocopy_threshold": 0, 00:18:13.534 "tls_version": 0, 00:18:13.534 "enable_ktls": false 00:18:13.534 } 00:18:13.534 } 00:18:13.534 ] 00:18:13.534 }, 00:18:13.534 { 00:18:13.534 "subsystem": "vmd", 00:18:13.534 "config": [] 00:18:13.534 }, 00:18:13.534 { 00:18:13.534 "subsystem": "accel", 00:18:13.534 "config": [ 00:18:13.534 { 00:18:13.534 "method": "accel_set_options", 00:18:13.534 "params": { 00:18:13.534 "small_cache_size": 128, 00:18:13.534 "large_cache_size": 16, 00:18:13.534 "task_count": 2048, 00:18:13.534 "sequence_count": 2048, 00:18:13.534 "buf_count": 2048 00:18:13.534 } 00:18:13.534 } 00:18:13.534 ] 00:18:13.534 }, 00:18:13.534 { 00:18:13.534 "subsystem": "bdev", 00:18:13.534 "config": [ 00:18:13.534 { 00:18:13.534 "method": "bdev_set_options", 00:18:13.534 "params": { 00:18:13.534 "bdev_io_pool_size": 65535, 00:18:13.534 "bdev_io_cache_size": 256, 00:18:13.534 "bdev_auto_examine": true, 00:18:13.534 "iobuf_small_cache_size": 128, 00:18:13.534 "iobuf_large_cache_size": 16 00:18:13.534 } 00:18:13.534 }, 00:18:13.534 { 00:18:13.534 "method": "bdev_raid_set_options", 00:18:13.534 "params": { 00:18:13.534 "process_window_size_kb": 1024, 00:18:13.534 "process_max_bandwidth_mb_sec": 0 00:18:13.534 } 00:18:13.534 }, 00:18:13.534 { 00:18:13.534 "method": "bdev_iscsi_set_options", 00:18:13.534 "params": { 00:18:13.534 "timeout_sec": 30 00:18:13.534 } 00:18:13.534 }, 00:18:13.534 { 00:18:13.534 "method": "bdev_nvme_set_options", 00:18:13.534 "params": { 00:18:13.534 "action_on_timeout": "none", 00:18:13.534 "timeout_us": 0, 00:18:13.534 "timeout_admin_us": 0, 00:18:13.534 "keep_alive_timeout_ms": 10000, 00:18:13.534 "arbitration_burst": 0, 00:18:13.534 "low_priority_weight": 0, 00:18:13.535 "medium_priority_weight": 0, 00:18:13.535 "high_priority_weight": 0, 00:18:13.535 "nvme_adminq_poll_period_us": 10000, 00:18:13.535 "nvme_ioq_poll_period_us": 0, 00:18:13.535 "io_queue_requests": 512, 00:18:13.535 "delay_cmd_submit": true, 00:18:13.535 "transport_retry_count": 4, 00:18:13.535 "bdev_retry_count": 3, 00:18:13.535 "transport_ack_timeout": 0, 00:18:13.535 "ctrlr_loss_timeout_sec": 0, 00:18:13.535 "reconnect_delay_sec": 0, 00:18:13.535 "fast_io_fail_timeout_sec": 0, 00:18:13.535 "disable_auto_failback": false, 00:18:13.535 "generate_uuids": false, 00:18:13.535 "transport_tos": 0, 00:18:13.535 "nvme_error_stat": false, 00:18:13.535 "rdma_srq_size": 0, 00:18:13.535 "io_path_stat": false, 00:18:13.535 "allow_accel_sequence": false, 00:18:13.535 "rdma_max_cq_size": 0, 00:18:13.535 "rdma_cm_event_timeout_ms": 0, 00:18:13.535 "dhchap_digests": [ 00:18:13.535 "sha256", 00:18:13.535 "sha384", 00:18:13.535 "sha512" 00:18:13.535 ], 00:18:13.535 "dhchap_dhgroups": [ 00:18:13.535 "null", 00:18:13.535 "ffdhe2048", 00:18:13.535 "ffdhe3072", 00:18:13.535 "ffdhe4096", 00:18:13.535 "ffdhe6144", 00:18:13.535 "ffdhe8192" 00:18:13.535 ], 00:18:13.535 "rdma_umr_per_io": false 00:18:13.535 } 00:18:13.535 }, 00:18:13.535 { 00:18:13.535 "method": "bdev_nvme_attach_controller", 00:18:13.535 "params": { 00:18:13.535 "name": "nvme0", 00:18:13.535 "trtype": "TCP", 00:18:13.535 "adrfam": "IPv4", 00:18:13.535 "traddr": "10.0.0.3", 00:18:13.535 "trsvcid": "4420", 00:18:13.535 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:18:13.535 "prchk_reftag": false, 00:18:13.535 "prchk_guard": false, 00:18:13.535 "ctrlr_loss_timeout_sec": 0, 00:18:13.535 "reconnect_delay_sec": 0, 00:18:13.535 "fast_io_fail_timeout_sec": 0, 00:18:13.535 "psk": "key0", 00:18:13.535 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:13.535 "hdgst": false, 00:18:13.535 "ddgst": false, 00:18:13.535 "multipath": "multipath" 00:18:13.535 } 00:18:13.535 }, 00:18:13.535 { 00:18:13.535 "method": "bdev_nvme_set_hotplug", 00:18:13.535 "params": { 00:18:13.535 "period_us": 100000, 00:18:13.535 "enable": false 00:18:13.535 } 00:18:13.535 }, 00:18:13.535 { 00:18:13.535 "method": "bdev_enable_histogram", 00:18:13.535 "params": { 00:18:13.535 "name": "nvme0n1", 00:18:13.535 "enable": true 00:18:13.535 } 00:18:13.535 }, 00:18:13.535 { 00:18:13.535 "method": "bdev_wait_for_examine" 00:18:13.535 } 00:18:13.535 ] 00:18:13.535 }, 00:18:13.535 { 00:18:13.535 "subsystem": "nbd", 00:18:13.535 "config": [] 00:18:13.535 } 00:18:13.535 ] 00:18:13.535 }' 00:18:13.535 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 77093 00:18:13.535 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 77093 ']' 00:18:13.535 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 77093 00:18:13.535 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:13.535 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.535 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77093 00:18:13.795 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:13.795 killing process with pid 77093 00:18:13.795 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:13.795 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77093' 00:18:13.795 Received shutdown signal, test time was about 1.000000 seconds 00:18:13.795 00:18:13.795 Latency(us) 00:18:13.795 [2024-12-13T09:20:07.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.795 [2024-12-13T09:20:07.685Z] =================================================================================================================== 00:18:13.795 [2024-12-13T09:20:07.685Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:13.795 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 77093 00:18:13.795 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 77093 00:18:14.760 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 77061 00:18:14.760 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 77061 ']' 00:18:14.760 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 77061 00:18:14.760 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:14.760 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.760 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77061 00:18:14.760 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 
-- # process_name=reactor_0 00:18:14.760 killing process with pid 77061 00:18:14.760 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:14.760 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77061' 00:18:14.760 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 77061 00:18:14.760 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 77061 00:18:15.698 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:15.698 "subsystems": [ 00:18:15.698 { 00:18:15.698 "subsystem": "keyring", 00:18:15.698 "config": [ 00:18:15.698 { 00:18:15.698 "method": "keyring_file_add_key", 00:18:15.698 "params": { 00:18:15.698 "name": "key0", 00:18:15.698 "path": "/tmp/tmp.EcOWTTjkkG" 00:18:15.698 } 00:18:15.698 } 00:18:15.698 ] 00:18:15.698 }, 00:18:15.698 { 00:18:15.698 "subsystem": "iobuf", 00:18:15.698 "config": [ 00:18:15.698 { 00:18:15.698 "method": "iobuf_set_options", 00:18:15.698 "params": { 00:18:15.698 "small_pool_count": 8192, 00:18:15.698 "large_pool_count": 1024, 00:18:15.698 "small_bufsize": 8192, 00:18:15.698 "large_bufsize": 135168, 00:18:15.698 "enable_numa": false 00:18:15.698 } 00:18:15.698 } 00:18:15.698 ] 00:18:15.698 }, 00:18:15.698 { 00:18:15.698 "subsystem": "sock", 00:18:15.698 "config": [ 00:18:15.698 { 00:18:15.698 "method": "sock_set_default_impl", 00:18:15.698 "params": { 00:18:15.698 "impl_name": "uring" 00:18:15.698 } 00:18:15.698 }, 00:18:15.698 { 00:18:15.698 "method": "sock_impl_set_options", 00:18:15.698 "params": { 00:18:15.698 "impl_name": "ssl", 00:18:15.698 "recv_buf_size": 4096, 00:18:15.698 "send_buf_size": 4096, 00:18:15.698 "enable_recv_pipe": true, 00:18:15.698 "enable_quickack": false, 00:18:15.698 "enable_placement_id": 0, 00:18:15.698 "enable_zerocopy_send_server": true, 00:18:15.698 "enable_zerocopy_send_client": false, 00:18:15.698 "zerocopy_threshold": 0, 00:18:15.698 "tls_version": 0, 00:18:15.698 "enable_ktls": false 00:18:15.698 } 00:18:15.698 }, 00:18:15.698 { 00:18:15.698 "method": "sock_impl_set_options", 00:18:15.698 "params": { 00:18:15.698 "impl_name": "posix", 00:18:15.698 "recv_buf_size": 2097152, 00:18:15.698 "send_buf_size": 2097152, 00:18:15.698 "enable_recv_pipe": true, 00:18:15.698 "enable_quickack": false, 00:18:15.698 "enable_placement_id": 0, 00:18:15.698 "enable_zerocopy_send_server": true, 00:18:15.698 "enable_zerocopy_send_client": false, 00:18:15.698 "zerocopy_threshold": 0, 00:18:15.698 "tls_version": 0, 00:18:15.698 "enable_ktls": false 00:18:15.698 } 00:18:15.698 }, 00:18:15.698 { 00:18:15.698 "method": "sock_impl_set_options", 00:18:15.698 "params": { 00:18:15.698 "impl_name": "uring", 00:18:15.698 "recv_buf_size": 2097152, 00:18:15.698 "send_buf_size": 2097152, 00:18:15.698 "enable_recv_pipe": true, 00:18:15.698 "enable_quickack": false, 00:18:15.698 "enable_placement_id": 0, 00:18:15.698 "enable_zerocopy_send_server": false, 00:18:15.698 "enable_zerocopy_send_client": false, 00:18:15.698 "zerocopy_threshold": 0, 00:18:15.698 "tls_version": 0, 00:18:15.698 "enable_ktls": false 00:18:15.698 } 00:18:15.698 } 00:18:15.698 ] 00:18:15.698 }, 00:18:15.698 { 00:18:15.698 "subsystem": "vmd", 00:18:15.698 "config": [] 00:18:15.698 }, 00:18:15.698 { 00:18:15.698 "subsystem": "accel", 00:18:15.698 "config": [ 00:18:15.698 { 00:18:15.698 "method": "accel_set_options", 00:18:15.698 "params": { 00:18:15.698 
"small_cache_size": 128, 00:18:15.698 "large_cache_size": 16, 00:18:15.698 "task_count": 2048, 00:18:15.698 "sequence_count": 2048, 00:18:15.698 "buf_count": 2048 00:18:15.698 } 00:18:15.698 } 00:18:15.698 ] 00:18:15.698 }, 00:18:15.698 { 00:18:15.698 "subsystem": "bdev", 00:18:15.698 "config": [ 00:18:15.698 { 00:18:15.698 "method": "bdev_set_options", 00:18:15.698 "params": { 00:18:15.698 "bdev_io_pool_size": 65535, 00:18:15.698 "bdev_io_cache_size": 256, 00:18:15.698 "bdev_auto_examine": true, 00:18:15.698 "iobuf_small_cache_size": 128, 00:18:15.698 "iobuf_large_cache_size": 16 00:18:15.698 } 00:18:15.698 }, 00:18:15.698 { 00:18:15.698 "method": "bdev_raid_set_options", 00:18:15.698 "params": { 00:18:15.698 "process_window_size_kb": 1024, 00:18:15.698 "process_max_bandwidth_mb_sec": 0 00:18:15.698 } 00:18:15.698 }, 00:18:15.698 { 00:18:15.698 "method": "bdev_iscsi_set_options", 00:18:15.698 "params": { 00:18:15.698 "timeout_sec": 30 00:18:15.698 } 00:18:15.698 }, 00:18:15.698 { 00:18:15.698 "method": "bdev_nvme_set_options", 00:18:15.698 "params": { 00:18:15.698 "action_on_timeout": "none", 00:18:15.698 "timeout_us": 0, 00:18:15.698 "timeout_admin_us": 0, 00:18:15.698 "keep_alive_timeout_ms": 10000, 00:18:15.698 "arbitration_burst": 0, 00:18:15.698 "low_priority_weight": 0, 00:18:15.698 "medium_priority_weight": 0, 00:18:15.698 "high_priority_weight": 0, 00:18:15.698 "nvme_adminq_poll_period_us": 10000, 00:18:15.698 "nvme_ioq_poll_period_us": 0, 00:18:15.698 "io_queue_requests": 0, 00:18:15.698 "delay_cmd_submit": true, 00:18:15.698 "transport_retry_count": 4, 00:18:15.698 "bdev_retry_count": 3, 00:18:15.698 "transport_ack_timeout": 0, 00:18:15.698 "ctrlr_loss_timeout_sec": 0, 00:18:15.698 "reconnect_delay_sec": 0, 00:18:15.698 "fast_io_fail_timeout_sec": 0, 00:18:15.698 "disable_auto_failback": false, 00:18:15.698 "generate_uuids": false, 00:18:15.698 "transport_tos": 0, 00:18:15.698 "nvme_error_stat": false, 00:18:15.698 "rdma_srq_size": 0, 00:18:15.698 "io_path_stat": false, 00:18:15.698 "allow_accel_sequence": false, 00:18:15.698 "rdma_max_cq_size": 0, 00:18:15.698 "rdma_cm_event_timeout_ms": 0, 00:18:15.698 "dhchap_digests": [ 00:18:15.698 "sha256", 00:18:15.698 "sha384", 00:18:15.698 "sha512" 00:18:15.698 ], 00:18:15.698 "dhchap_dhgroups": [ 00:18:15.698 "null", 00:18:15.698 "ffdhe2048", 00:18:15.698 "ffdhe3072", 00:18:15.698 "ffdhe4096", 00:18:15.698 "ffdhe6144", 00:18:15.698 "ffdhe8192" 00:18:15.698 ], 00:18:15.698 "rdma_umr_per_io": false 00:18:15.698 } 00:18:15.698 }, 00:18:15.698 { 00:18:15.698 "method": "bdev_nvme_set_hotplug", 00:18:15.698 "params": { 00:18:15.698 "period_us": 100000, 00:18:15.698 "enable": false 00:18:15.698 } 00:18:15.698 }, 00:18:15.698 { 00:18:15.698 "method": "bdev_malloc_create", 00:18:15.698 "params": { 00:18:15.698 "name": "malloc0", 00:18:15.698 "num_blocks": 8192, 00:18:15.698 "block_size": 4096, 00:18:15.698 "physical_block_size": 4096, 00:18:15.698 "uuid": "46465411-c2e7-492e-a4fb-bee2d6e53a4c", 00:18:15.698 "optimal_io_boundary": 0, 00:18:15.698 "md_size": 0, 00:18:15.698 "dif_type": 0, 00:18:15.698 "dif_is_head_of_md": false, 00:18:15.698 "dif_pi_format": 0 00:18:15.698 } 00:18:15.698 }, 00:18:15.698 { 00:18:15.698 "method": "bdev_wait_for_examine" 00:18:15.698 } 00:18:15.698 ] 00:18:15.698 }, 00:18:15.698 { 00:18:15.698 "subsystem": "nbd", 00:18:15.698 "config": [] 00:18:15.698 }, 00:18:15.698 { 00:18:15.698 "subsystem": "scheduler", 00:18:15.698 "config": [ 00:18:15.698 { 00:18:15.698 "method": "framework_set_scheduler", 00:18:15.698 
"params": { 00:18:15.698 "name": "static" 00:18:15.698 } 00:18:15.698 } 00:18:15.698 ] 00:18:15.698 }, 00:18:15.698 { 00:18:15.698 "subsystem": "nvmf", 00:18:15.698 "config": [ 00:18:15.698 { 00:18:15.698 "method": "nvmf_set_config", 00:18:15.698 "params": { 00:18:15.698 "discovery_filter": "match_any", 00:18:15.698 "admin_cmd_passthru": { 00:18:15.698 "identify_ctrlr": false 00:18:15.698 }, 00:18:15.698 "dhchap_digests": [ 00:18:15.698 "sha256", 00:18:15.698 "sha384", 00:18:15.698 "sha512" 00:18:15.698 ], 00:18:15.698 "dhchap_dhgroups": [ 00:18:15.698 "null", 00:18:15.698 "ffdhe2048", 00:18:15.698 "ffdhe3072", 00:18:15.698 "ffdhe4096", 00:18:15.698 "ffdhe6144", 00:18:15.698 "ffdhe8192" 00:18:15.698 ] 00:18:15.698 } 00:18:15.698 }, 00:18:15.698 { 00:18:15.698 "method": "nvmf_set_max_subsystems", 00:18:15.698 "params": { 00:18:15.698 "max_subsystems": 1024 00:18:15.698 } 00:18:15.698 }, 00:18:15.698 { 00:18:15.698 "method": "nvmf_set_crdt", 00:18:15.698 "params": { 00:18:15.698 "crdt1": 0, 00:18:15.698 "crdt2": 0, 00:18:15.698 "crdt3": 0 00:18:15.698 } 00:18:15.698 }, 00:18:15.698 { 00:18:15.698 "method": "nvmf_create_transport", 00:18:15.698 "params": { 00:18:15.698 "trtype": "TCP", 00:18:15.698 "max_queue_depth": 128, 00:18:15.698 "max_io_qpairs_per_ctrlr": 127, 00:18:15.699 "in_capsule_data_size": 4096, 00:18:15.699 "max_io_size": 131072, 00:18:15.699 "io_unit_size": 131072, 00:18:15.699 "max_aq_depth": 128, 00:18:15.699 "num_shared_buffers": 511, 00:18:15.699 "buf_cache_size": 4294967295, 00:18:15.699 "dif_insert_or_strip": false, 00:18:15.699 "zcopy": false, 00:18:15.699 "c2h_success": false, 00:18:15.699 "sock_priority": 0, 00:18:15.699 "abort_timeout_sec": 1, 00:18:15.699 "ack_timeout": 0, 00:18:15.699 "data_wr_pool_size": 0 00:18:15.699 } 00:18:15.699 }, 00:18:15.699 { 00:18:15.699 "method": "nvmf_create_subsystem", 00:18:15.699 "params": { 00:18:15.699 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.699 "allow_any_host": false, 00:18:15.699 "serial_number": "00000000000000000000", 00:18:15.699 "model_number": "SPDK bdev Controller", 00:18:15.699 "max_namespaces": 32, 00:18:15.699 "min_cntlid": 1, 00:18:15.699 "max_cntlid": 65519, 00:18:15.699 "ana_reporting": false 00:18:15.699 } 00:18:15.699 }, 00:18:15.699 { 00:18:15.699 "method": "nvmf_subsystem_add_host", 00:18:15.699 "params": { 00:18:15.699 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.699 "host": "nqn.2016-06.io.spdk:host1", 00:18:15.699 "psk": "key0" 00:18:15.699 } 00:18:15.699 }, 00:18:15.699 { 00:18:15.699 "method": "nvmf_subsystem_add_ns", 00:18:15.699 "params": { 00:18:15.699 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.699 "namespace": { 00:18:15.699 "nsid": 1, 00:18:15.699 "bdev_name": "malloc0", 00:18:15.699 "nguid": "46465411C2E7492EA4FBBEE2D6E53A4C", 00:18:15.699 "uuid": "46465411-c2e7-492e-a4fb-bee2d6e53a4c", 00:18:15.699 "no_auto_visible": false 00:18:15.699 } 00:18:15.699 } 00:18:15.699 }, 00:18:15.699 { 00:18:15.699 "method": "nvmf_subsystem_add_listener", 00:18:15.699 "params": { 00:18:15.699 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.699 "listen_address": { 00:18:15.699 "trtype": "TCP", 00:18:15.699 "adrfam": "IPv4", 00:18:15.699 "traddr": "10.0.0.3", 00:18:15.699 "trsvcid": "4420" 00:18:15.699 }, 00:18:15.699 "secure_channel": false, 00:18:15.699 "sock_impl": "ssl" 00:18:15.699 } 00:18:15.699 } 00:18:15.699 ] 00:18:15.699 } 00:18:15.699 ] 00:18:15.699 }' 00:18:15.699 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:15.699 09:20:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:15.699 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:15.699 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.699 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=77168 00:18:15.699 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 77168 00:18:15.699 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:15.699 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 77168 ']' 00:18:15.699 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.699 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.699 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.699 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.699 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.699 [2024-12-13 09:20:09.408263] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:15.699 [2024-12-13 09:20:09.409390] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.958 [2024-12-13 09:20:09.591223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.958 [2024-12-13 09:20:09.675923] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.958 [2024-12-13 09:20:09.676000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.958 [2024-12-13 09:20:09.676034] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.958 [2024-12-13 09:20:09.676056] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.958 [2024-12-13 09:20:09.676070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
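Annotation: the JSON document echoed by tls.sh@273 above is a complete nvmf_tgt configuration that the test feeds to the target over /dev/fd/62 instead of issuing live RPCs. The TLS-relevant pieces are keyring_file_add_key (registers the PSK file /tmp/tmp.EcOWTTjkkG as "key0"), nvmf_subsystem_add_host (binds that key to nqn.2016-06.io.spdk:host1), and the TCP listener on 10.0.0.3:4420 created with "sock_impl": "ssl". A rough live-RPC equivalent is sketched below; the flag spellings and the malloc size conversion (8192 blocks x 4096 B = 32 MB) are assumed from typical rpc.py usage, not taken from this run, and the ssl sock_impl selection is only expressed here through the config file:

# sketch only: approximate live-RPC equivalent of the echoed target config
rpc.py keyring_file_add_key key0 /tmp/tmp.EcOWTTjkkG     # register the PSK file as key0
rpc.py bdev_malloc_create -b malloc0 32 4096              # 32 MB ram disk, 4 KiB blocks
rpc.py nvmf_create_transport -t tcp
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -m 32
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420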
00:18:15.958 [2024-12-13 09:20:09.677215] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.217 [2024-12-13 09:20:09.942568] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:16.476 [2024-12-13 09:20:10.110213] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.476 [2024-12-13 09:20:10.142159] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:16.476 [2024-12-13 09:20:10.142453] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:16.476 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.476 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:16.476 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:16.476 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:16.477 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.736 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.736 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=77200 00:18:16.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:16.736 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 77200 /var/tmp/bdevperf.sock 00:18:16.736 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 77200 ']' 00:18:16.736 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.736 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.736 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:16.736 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
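Annotation: the initiator side is bdevperf, started on core mask 0x2 (the target owns core 0) with -z so it idles until driven over its own RPC socket /var/tmp/bdevperf.sock, and with its configuration supplied on /dev/fd/63. The workload flags mean: -q 128 outstanding I/Os, -o 4k I/O size, -w verify (read back and check what was written), -t 1 second. waitforlisten just polls the RPC socket until the application answers; a simplified sketch of that pattern (not the exact helper from autotest_common.sh):

# poll until bdevperf's RPC server is up (sketch; rpc_get_methods is used as a cheap query)
for _ in $(seq 1 100); do
    if rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done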
00:18:16.736 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.736 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.736 09:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:16.736 "subsystems": [ 00:18:16.736 { 00:18:16.736 "subsystem": "keyring", 00:18:16.736 "config": [ 00:18:16.736 { 00:18:16.736 "method": "keyring_file_add_key", 00:18:16.736 "params": { 00:18:16.736 "name": "key0", 00:18:16.736 "path": "/tmp/tmp.EcOWTTjkkG" 00:18:16.736 } 00:18:16.736 } 00:18:16.736 ] 00:18:16.736 }, 00:18:16.736 { 00:18:16.736 "subsystem": "iobuf", 00:18:16.736 "config": [ 00:18:16.736 { 00:18:16.736 "method": "iobuf_set_options", 00:18:16.736 "params": { 00:18:16.736 "small_pool_count": 8192, 00:18:16.736 "large_pool_count": 1024, 00:18:16.736 "small_bufsize": 8192, 00:18:16.736 "large_bufsize": 135168, 00:18:16.736 "enable_numa": false 00:18:16.736 } 00:18:16.736 } 00:18:16.736 ] 00:18:16.736 }, 00:18:16.736 { 00:18:16.736 "subsystem": "sock", 00:18:16.736 "config": [ 00:18:16.736 { 00:18:16.736 "method": "sock_set_default_impl", 00:18:16.736 "params": { 00:18:16.736 "impl_name": "uring" 00:18:16.736 } 00:18:16.736 }, 00:18:16.736 { 00:18:16.736 "method": "sock_impl_set_options", 00:18:16.736 "params": { 00:18:16.736 "impl_name": "ssl", 00:18:16.736 "recv_buf_size": 4096, 00:18:16.736 "send_buf_size": 4096, 00:18:16.736 "enable_recv_pipe": true, 00:18:16.736 "enable_quickack": false, 00:18:16.736 "enable_placement_id": 0, 00:18:16.736 "enable_zerocopy_send_server": true, 00:18:16.736 "enable_zerocopy_send_client": false, 00:18:16.736 "zerocopy_threshold": 0, 00:18:16.736 "tls_version": 0, 00:18:16.736 "enable_ktls": false 00:18:16.736 } 00:18:16.736 }, 00:18:16.736 { 00:18:16.736 "method": "sock_impl_set_options", 00:18:16.736 "params": { 00:18:16.736 "impl_name": "posix", 00:18:16.736 "recv_buf_size": 2097152, 00:18:16.736 "send_buf_size": 2097152, 00:18:16.736 "enable_recv_pipe": true, 00:18:16.736 "enable_quickack": false, 00:18:16.736 "enable_placement_id": 0, 00:18:16.736 "enable_zerocopy_send_server": true, 00:18:16.736 "enable_zerocopy_send_client": false, 00:18:16.736 "zerocopy_threshold": 0, 00:18:16.736 "tls_version": 0, 00:18:16.736 "enable_ktls": false 00:18:16.736 } 00:18:16.736 }, 00:18:16.736 { 00:18:16.736 "method": "sock_impl_set_options", 00:18:16.736 "params": { 00:18:16.736 "impl_name": "uring", 00:18:16.736 "recv_buf_size": 2097152, 00:18:16.736 "send_buf_size": 2097152, 00:18:16.736 "enable_recv_pipe": true, 00:18:16.736 "enable_quickack": false, 00:18:16.736 "enable_placement_id": 0, 00:18:16.736 "enable_zerocopy_send_server": false, 00:18:16.736 "enable_zerocopy_send_client": false, 00:18:16.736 "zerocopy_threshold": 0, 00:18:16.736 "tls_version": 0, 00:18:16.736 "enable_ktls": false 00:18:16.736 } 00:18:16.736 } 00:18:16.736 ] 00:18:16.736 }, 00:18:16.736 { 00:18:16.736 "subsystem": "vmd", 00:18:16.736 "config": [] 00:18:16.736 }, 00:18:16.736 { 00:18:16.736 "subsystem": "accel", 00:18:16.736 "config": [ 00:18:16.736 { 00:18:16.736 "method": "accel_set_options", 00:18:16.736 "params": { 00:18:16.736 "small_cache_size": 128, 00:18:16.736 "large_cache_size": 16, 00:18:16.736 "task_count": 2048, 00:18:16.736 "sequence_count": 2048, 00:18:16.736 "buf_count": 2048 00:18:16.736 } 00:18:16.736 } 00:18:16.736 ] 00:18:16.736 }, 00:18:16.736 { 00:18:16.736 "subsystem": "bdev", 00:18:16.736 "config": [ 00:18:16.736 { 00:18:16.736 "method": 
"bdev_set_options", 00:18:16.736 "params": { 00:18:16.736 "bdev_io_pool_size": 65535, 00:18:16.736 "bdev_io_cache_size": 256, 00:18:16.736 "bdev_auto_examine": true, 00:18:16.736 "iobuf_small_cache_size": 128, 00:18:16.736 "iobuf_large_cache_size": 16 00:18:16.736 } 00:18:16.736 }, 00:18:16.736 { 00:18:16.736 "method": "bdev_raid_set_options", 00:18:16.736 "params": { 00:18:16.736 "process_window_size_kb": 1024, 00:18:16.736 "process_max_bandwidth_mb_sec": 0 00:18:16.736 } 00:18:16.736 }, 00:18:16.736 { 00:18:16.736 "method": "bdev_iscsi_set_options", 00:18:16.736 "params": { 00:18:16.736 "timeout_sec": 30 00:18:16.736 } 00:18:16.736 }, 00:18:16.736 { 00:18:16.736 "method": "bdev_nvme_set_options", 00:18:16.736 "params": { 00:18:16.736 "action_on_timeout": "none", 00:18:16.736 "timeout_us": 0, 00:18:16.736 "timeout_admin_us": 0, 00:18:16.736 "keep_alive_timeout_ms": 10000, 00:18:16.736 "arbitration_burst": 0, 00:18:16.736 "low_priority_weight": 0, 00:18:16.736 "medium_priority_weight": 0, 00:18:16.737 "high_priority_weight": 0, 00:18:16.737 "nvme_adminq_poll_period_us": 10000, 00:18:16.737 "nvme_ioq_poll_period_us": 0, 00:18:16.737 "io_queue_requests": 512, 00:18:16.737 "delay_cmd_submit": true, 00:18:16.737 "transport_retry_count": 4, 00:18:16.737 "bdev_retry_count": 3, 00:18:16.737 "transport_ack_timeout": 0, 00:18:16.737 "ctrlr_loss_timeout_sec": 0, 00:18:16.737 "reconnect_delay_sec": 0, 00:18:16.737 "fast_io_fail_timeout_sec": 0, 00:18:16.737 "disable_auto_failback": false, 00:18:16.737 "generate_uuids": false, 00:18:16.737 "transport_tos": 0, 00:18:16.737 "nvme_error_stat": false, 00:18:16.737 "rdma_srq_size": 0, 00:18:16.737 "io_path_stat": false, 00:18:16.737 "allow_accel_sequence": false, 00:18:16.737 "rdma_max_cq_size": 0, 00:18:16.737 "rdma_cm_event_timeout_ms": 0, 00:18:16.737 "dhchap_digests": [ 00:18:16.737 "sha256", 00:18:16.737 "sha384", 00:18:16.737 "sha512" 00:18:16.737 ], 00:18:16.737 "dhchap_dhgroups": [ 00:18:16.737 "null", 00:18:16.737 "ffdhe2048", 00:18:16.737 "ffdhe3072", 00:18:16.737 "ffdhe4096", 00:18:16.737 "ffdhe6144", 00:18:16.737 "ffdhe8192" 00:18:16.737 ], 00:18:16.737 "rdma_umr_per_io": false 00:18:16.737 } 00:18:16.737 }, 00:18:16.737 { 00:18:16.737 "method": "bdev_nvme_attach_controller", 00:18:16.737 "params": { 00:18:16.737 "name": "nvme0", 00:18:16.737 "trtype": "TCP", 00:18:16.737 "adrfam": "IPv4", 00:18:16.737 "traddr": "10.0.0.3", 00:18:16.737 "trsvcid": "4420", 00:18:16.737 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.737 "prchk_reftag": false, 00:18:16.737 "prchk_guard": false, 00:18:16.737 "ctrlr_loss_timeout_sec": 0, 00:18:16.737 "reconnect_delay_sec": 0, 00:18:16.737 "fast_io_fail_timeout_sec": 0, 00:18:16.737 "psk": "key0", 00:18:16.737 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:16.737 "hdgst": false, 00:18:16.737 "ddgst": false, 00:18:16.737 "multipath": "multipath" 00:18:16.737 } 00:18:16.737 }, 00:18:16.737 { 00:18:16.737 "method": "bdev_nvme_set_hotplug", 00:18:16.737 "params": { 00:18:16.737 "period_us": 100000, 00:18:16.737 "enable": false 00:18:16.737 } 00:18:16.737 }, 00:18:16.737 { 00:18:16.737 "method": "bdev_enable_histogram", 00:18:16.737 "params": { 00:18:16.737 "name": "nvme0n1", 00:18:16.737 "enable": true 00:18:16.737 } 00:18:16.737 }, 00:18:16.737 { 00:18:16.737 "method": "bdev_wait_for_examine" 00:18:16.737 } 00:18:16.737 ] 00:18:16.737 }, 00:18:16.737 { 00:18:16.737 "subsystem": "nbd", 00:18:16.737 "config": [] 00:18:16.737 } 00:18:16.737 ] 00:18:16.737 }' 00:18:16.737 [2024-12-13 09:20:10.505120] Starting SPDK 
v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:16.737 [2024-12-13 09:20:10.505363] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77200 ] 00:18:16.999 [2024-12-13 09:20:10.687771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.999 [2024-12-13 09:20:10.778469] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.258 [2024-12-13 09:20:11.018883] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:17.258 [2024-12-13 09:20:11.124715] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:17.825 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.825 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:17.825 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:17.826 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:17.826 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.826 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:18.085 Running I/O for 1 seconds... 00:18:19.022 3092.00 IOPS, 12.08 MiB/s 00:18:19.022 Latency(us) 00:18:19.022 [2024-12-13T09:20:12.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.022 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:19.022 Verification LBA range: start 0x0 length 0x2000 00:18:19.022 nvme0n1 : 1.02 3145.65 12.29 0.00 0.00 40096.62 2591.65 28359.21 00:18:19.022 [2024-12-13T09:20:12.912Z] =================================================================================================================== 00:18:19.022 [2024-12-13T09:20:12.912Z] Total : 3145.65 12.29 0.00 0.00 40096.62 2591.65 28359.21 00:18:19.022 { 00:18:19.022 "results": [ 00:18:19.022 { 00:18:19.022 "job": "nvme0n1", 00:18:19.022 "core_mask": "0x2", 00:18:19.022 "workload": "verify", 00:18:19.022 "status": "finished", 00:18:19.022 "verify_range": { 00:18:19.022 "start": 0, 00:18:19.022 "length": 8192 00:18:19.022 }, 00:18:19.022 "queue_depth": 128, 00:18:19.022 "io_size": 4096, 00:18:19.022 "runtime": 1.023635, 00:18:19.022 "iops": 3145.65250308948, 00:18:19.022 "mibps": 12.287705090193281, 00:18:19.022 "io_failed": 0, 00:18:19.022 "io_timeout": 0, 00:18:19.022 "avg_latency_us": 40096.61593224167, 00:18:19.022 "min_latency_us": 2591.650909090909, 00:18:19.022 "max_latency_us": 28359.214545454546 00:18:19.022 } 00:18:19.022 ], 00:18:19.022 "core_count": 1 00:18:19.022 } 00:18:19.022 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:19.022 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:19.022 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:19.022 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:18:19.022 09:20:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:18:19.022 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:19.022 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:19.022 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:19.022 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:19.022 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:19.022 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:19.022 nvmf_trace.0 00:18:19.283 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:18:19.283 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 77200 00:18:19.283 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 77200 ']' 00:18:19.283 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 77200 00:18:19.283 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:19.283 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:19.283 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77200 00:18:19.283 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:19.283 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:19.283 killing process with pid 77200 00:18:19.283 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77200' 00:18:19.283 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 77200 00:18:19.283 Received shutdown signal, test time was about 1.000000 seconds 00:18:19.283 00:18:19.283 Latency(us) 00:18:19.283 [2024-12-13T09:20:13.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.283 [2024-12-13T09:20:13.173Z] =================================================================================================================== 00:18:19.283 [2024-12-13T09:20:13.173Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:19.283 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 77200 00:18:20.222 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:20.222 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:20.222 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:20.222 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:20.222 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:20.222 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:20.222 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:20.222 rmmod nvme_tcp 00:18:20.222 rmmod nvme_fabrics 00:18:20.222 rmmod 
nvme_keyring 00:18:20.222 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:20.222 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:20.222 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:20.222 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 77168 ']' 00:18:20.222 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 77168 00:18:20.222 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 77168 ']' 00:18:20.222 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 77168 00:18:20.222 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:20.222 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.222 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77168 00:18:20.222 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:20.222 killing process with pid 77168 00:18:20.222 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:20.222 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77168' 00:18:20.222 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 77168 00:18:20.222 09:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 77168 00:18:21.159 09:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:21.159 09:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:21.159 09:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:21.159 09:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:18:21.159 09:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:21.159 09:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:21.159 09:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:21.159 09:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:21.159 09:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:21.159 09:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:21.159 09:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:21.159 09:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:21.159 09:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:21.159 09:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:21.159 09:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:21.159 09:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:21.159 09:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:21.159 09:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:21.159 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:21.422 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:21.422 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:21.422 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:21.422 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:21.422 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.422 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:21.422 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.422 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:18:21.422 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.TL7MsODgsU /tmp/tmp.kknbKTOL7D /tmp/tmp.EcOWTTjkkG 00:18:21.422 ************************************ 00:18:21.422 END TEST nvmf_tls 00:18:21.422 ************************************ 00:18:21.422 00:18:21.422 real 1m46.070s 00:18:21.422 user 2m55.703s 00:18:21.422 sys 0m26.650s 00:18:21.422 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:21.422 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.422 09:20:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:21.422 09:20:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:21.422 09:20:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:21.422 09:20:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:21.422 ************************************ 00:18:21.422 START TEST nvmf_fips 00:18:21.422 ************************************ 00:18:21.422 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:21.422 * Looking for test storage... 
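Annotation: the verify run above completed cleanly. The reported numbers are internally consistent:

  throughput: 3145.65 IOPS x 4096 B / 2^20 = 12.29 MiB/s  (matches the MiB/s column)
  latency:    128 outstanding I/Os / 3145.65 IOPS = 40.7 ms  (close to the reported 40.1 ms average)

Teardown then kills bdevperf (pid 77200) and the target (pid 77168), removes the nvme-tcp, nvme-fabrics and nvme_keyring modules, restores the iptables rules tagged SPDK_NVMF, deletes the veth/bridge devices and the nvmf_tgt_ns_spdk namespace, and finally removes the three temporary PSK files, ending TEST nvmf_tls after about 1m46s of wall-clock time. The nvmf_fips test that starts next rebuilds the same nvmf/common.sh network environment from scratch.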
00:18:21.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:18:21.423 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:21.423 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:18:21.423 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:21.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.682 --rc genhtml_branch_coverage=1 00:18:21.682 --rc genhtml_function_coverage=1 00:18:21.682 --rc genhtml_legend=1 00:18:21.682 --rc geninfo_all_blocks=1 00:18:21.682 --rc geninfo_unexecuted_blocks=1 00:18:21.682 00:18:21.682 ' 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:21.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.682 --rc genhtml_branch_coverage=1 00:18:21.682 --rc genhtml_function_coverage=1 00:18:21.682 --rc genhtml_legend=1 00:18:21.682 --rc geninfo_all_blocks=1 00:18:21.682 --rc geninfo_unexecuted_blocks=1 00:18:21.682 00:18:21.682 ' 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:21.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.682 --rc genhtml_branch_coverage=1 00:18:21.682 --rc genhtml_function_coverage=1 00:18:21.682 --rc genhtml_legend=1 00:18:21.682 --rc geninfo_all_blocks=1 00:18:21.682 --rc geninfo_unexecuted_blocks=1 00:18:21.682 00:18:21.682 ' 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:21.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.682 --rc genhtml_branch_coverage=1 00:18:21.682 --rc genhtml_function_coverage=1 00:18:21.682 --rc genhtml_legend=1 00:18:21.682 --rc geninfo_all_blocks=1 00:18:21.682 --rc geninfo_unexecuted_blocks=1 00:18:21.682 00:18:21.682 ' 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
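Annotation: fips.sh validates the crypto environment before any NVMe/TCP traffic is generated: it requires OpenSSL >= 3.0.0 (the version comparison traced below works like the lcov check above, splitting the version strings on '.' and '-' and comparing field by field), locates the provider module at /usr/lib64/ossl-modules/fips.so, points OPENSSL_CONF at a generated spdk_fips.conf that activates the fips provider, and then confirms FIPS mode is really in effect by expecting a non-approved digest to fail. The "Error setting digest" lines further down are therefore the success case, not a failure. A hand-run version of the same check might look like this (sketch; the config file name is the one the test generates):

# with the FIPS provider active, MD5 must be rejected
OPENSSL_CONF=spdk_fips.conf openssl md5 /dev/null \
    && echo 'md5 accepted - FIPS provider NOT active' \
    || echo 'md5 rejected - FIPS provider active'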
00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:21.682 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:21.683 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:21.683 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:18:21.942 Error setting digest 00:18:21.942 40726ED15F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:21.942 40726ED15F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:21.942 
09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:21.942 Cannot find device "nvmf_init_br" 00:18:21.942 09:20:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:21.942 Cannot find device "nvmf_init_br2" 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:21.942 Cannot find device "nvmf_tgt_br" 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:18:21.942 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:21.943 Cannot find device "nvmf_tgt_br2" 00:18:21.943 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:18:21.943 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:21.943 Cannot find device "nvmf_init_br" 00:18:21.943 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:18:21.943 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:21.943 Cannot find device "nvmf_init_br2" 00:18:21.943 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:18:21.943 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:21.943 Cannot find device "nvmf_tgt_br" 00:18:21.943 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:18:21.943 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:21.943 Cannot find device "nvmf_tgt_br2" 00:18:21.943 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:18:21.943 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:21.943 Cannot find device "nvmf_br" 00:18:21.943 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:18:21.943 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:21.943 Cannot find device "nvmf_init_if" 00:18:21.943 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:18:21.943 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:21.943 Cannot find device "nvmf_init_if2" 00:18:21.943 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:18:21.943 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:21.943 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:21.943 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:18:21.943 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:21.943 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:21.943 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:18:21.943 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:21.943 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:21.943 09:20:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:21.943 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:21.943 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:21.943 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:22.202 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:22.202 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:18:22.202 00:18:22.202 --- 10.0.0.3 ping statistics --- 00:18:22.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.202 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:18:22.202 09:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:22.202 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:22.202 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:18:22.202 00:18:22.202 --- 10.0.0.4 ping statistics --- 00:18:22.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.202 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:22.203 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:22.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:22.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:18:22.203 00:18:22.203 --- 10.0.0.1 ping statistics --- 00:18:22.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.203 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:18:22.203 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:22.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:22.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:18:22.203 00:18:22.203 --- 10.0.0.2 ping statistics --- 00:18:22.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.203 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:18:22.203 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:22.203 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:18:22.203 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:22.203 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:22.203 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:22.203 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:22.203 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:22.203 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:22.203 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:22.203 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:18:22.203 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:22.203 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:22.203 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:22.203 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=77534 00:18:22.203 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:22.203 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 77534 00:18:22.203 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 77534 ']' 00:18:22.203 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.203 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.203 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.203 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.203 09:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:22.462 [2024-12-13 09:20:16.207234] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
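For orientation, the nvmf_veth_init sequence traced above reduces to the following hand-runnable sketch (interface names and addresses are taken from the trace; the symmetric second pair, nvmf_init_if2/10.0.0.2 and nvmf_tgt_if2/10.0.0.4, is built the same way and is omitted here, as is error handling):

  # target namespace plus one veth pair per side
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # address the endpoints
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  # bring the links up and bridge the *_br peers so the two sides can reach each other
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # open the NVMe/TCP port, allow bridged traffic, then sanity-check with ping
  # (rules are tagged with an SPDK_NVMF comment so teardown can strip them later)
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF: nvmf tcp'
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF: bridge forward'
  ping -c 1 10.0.0.3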
00:18:22.462 [2024-12-13 09:20:16.207423] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.721 [2024-12-13 09:20:16.395670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.721 [2024-12-13 09:20:16.520601] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:22.721 [2024-12-13 09:20:16.520715] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:22.721 [2024-12-13 09:20:16.520752] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:22.721 [2024-12-13 09:20:16.520770] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:22.721 [2024-12-13 09:20:16.520788] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:22.721 [2024-12-13 09:20:16.522266] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.980 [2024-12-13 09:20:16.705656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:23.546 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.546 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:23.546 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:23.546 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:23.546 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:23.546 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.546 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:18:23.546 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:23.546 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:18:23.546 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.0vm 00:18:23.546 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:23.546 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.0vm 00:18:23.546 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.0vm 00:18:23.546 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.0vm 00:18:23.546 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:23.546 [2024-12-13 09:20:17.391250] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:23.546 [2024-12-13 09:20:17.407251] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:23.546 [2024-12-13 09:20:17.407854] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:23.805 malloc0 00:18:23.805 09:20:17 
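The FIPS test hands the TLS PSK to SPDK through a file rather than on the command line; the key handling above amounts to this small sketch (the key value is the test's fixed interchange-format PSK, the file name comes from mktemp):

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=$(mktemp -t spdk-psk.XXX)
  echo -n "$key" > "$key_path"   # no trailing newline; the file content is the key verbatim
  chmod 0600 "$key_path"         # restrict permissions before handing the file to the keyring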
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:23.805 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=77576 00:18:23.805 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:23.805 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 77576 /var/tmp/bdevperf.sock 00:18:23.805 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 77576 ']' 00:18:23.805 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:23.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:23.805 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:23.805 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:23.805 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:23.805 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:23.805 [2024-12-13 09:20:17.649156] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:23.805 [2024-12-13 09:20:17.649373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77576 ] 00:18:24.062 [2024-12-13 09:20:17.823402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.062 [2024-12-13 09:20:17.913935] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:24.321 [2024-12-13 09:20:18.070670] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:24.888 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.888 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:24.888 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.0vm 00:18:24.888 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:25.146 [2024-12-13 09:20:18.960827] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:25.405 TLSTESTn1 00:18:25.405 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:25.405 Running I/O for 10 seconds... 
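The initiator side of the TLS setup is driven entirely over bdevperf's private RPC socket; condensed from the trace (binary and script paths as logged, relative to the SPDK repo), the flow is roughly:

  # start bdevperf idle (-z) on its own RPC socket and CPU core
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

  # register the PSK file as key0, then attach the controller with TLS (--psk selects the key by name)
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.0vm
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

  # run the queued verify workload against the resulting TLSTESTn1 bdev
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests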
00:18:27.719 2816.00 IOPS, 11.00 MiB/s [2024-12-13T09:20:22.177Z] 2942.50 IOPS, 11.49 MiB/s [2024-12-13T09:20:23.555Z] 2970.33 IOPS, 11.60 MiB/s [2024-12-13T09:20:24.498Z] 2996.75 IOPS, 11.71 MiB/s [2024-12-13T09:20:25.435Z] 3008.00 IOPS, 11.75 MiB/s [2024-12-13T09:20:26.394Z] 3026.50 IOPS, 11.82 MiB/s [2024-12-13T09:20:27.331Z] 3018.00 IOPS, 11.79 MiB/s [2024-12-13T09:20:28.266Z] 3046.50 IOPS, 11.90 MiB/s [2024-12-13T09:20:29.203Z] 3066.22 IOPS, 11.98 MiB/s [2024-12-13T09:20:29.203Z] 3079.40 IOPS, 12.03 MiB/s 00:18:35.313 Latency(us) 00:18:35.313 [2024-12-13T09:20:29.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.313 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:35.313 Verification LBA range: start 0x0 length 0x2000 00:18:35.313 TLSTESTn1 : 10.02 3085.31 12.05 0.00 0.00 41408.50 7983.48 41466.41 00:18:35.313 [2024-12-13T09:20:29.203Z] =================================================================================================================== 00:18:35.313 [2024-12-13T09:20:29.203Z] Total : 3085.31 12.05 0.00 0.00 41408.50 7983.48 41466.41 00:18:35.313 { 00:18:35.313 "results": [ 00:18:35.313 { 00:18:35.313 "job": "TLSTESTn1", 00:18:35.313 "core_mask": "0x4", 00:18:35.314 "workload": "verify", 00:18:35.314 "status": "finished", 00:18:35.314 "verify_range": { 00:18:35.314 "start": 0, 00:18:35.314 "length": 8192 00:18:35.314 }, 00:18:35.314 "queue_depth": 128, 00:18:35.314 "io_size": 4096, 00:18:35.314 "runtime": 10.02071, 00:18:35.314 "iops": 3085.3103223224703, 00:18:35.314 "mibps": 12.05199344657215, 00:18:35.314 "io_failed": 0, 00:18:35.314 "io_timeout": 0, 00:18:35.314 "avg_latency_us": 41408.49843245993, 00:18:35.314 "min_latency_us": 7983.476363636363, 00:18:35.314 "max_latency_us": 41466.41454545454 00:18:35.314 } 00:18:35.314 ], 00:18:35.314 "core_count": 1 00:18:35.314 } 00:18:35.573 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:35.573 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:35.573 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:18:35.573 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:18:35.573 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:35.573 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:35.573 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:35.573 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:35.573 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:35.573 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:35.573 nvmf_trace.0 00:18:35.573 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:18:35.573 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 77576 00:18:35.573 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 77576 ']' 00:18:35.573 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
77576 00:18:35.573 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:18:35.573 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:35.573 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77576 00:18:35.573 killing process with pid 77576 00:18:35.573 Received shutdown signal, test time was about 10.000000 seconds 00:18:35.573 00:18:35.573 Latency(us) 00:18:35.573 [2024-12-13T09:20:29.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.573 [2024-12-13T09:20:29.463Z] =================================================================================================================== 00:18:35.573 [2024-12-13T09:20:29.463Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:35.573 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:35.573 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:35.573 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77576' 00:18:35.573 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 77576 00:18:35.573 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 77576 00:18:36.510 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:36.510 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:36.510 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:18:36.510 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:36.510 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:18:36.510 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:36.510 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:36.510 rmmod nvme_tcp 00:18:36.510 rmmod nvme_fabrics 00:18:36.510 rmmod nvme_keyring 00:18:36.510 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:36.510 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:18:36.510 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:18:36.510 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 77534 ']' 00:18:36.510 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 77534 00:18:36.510 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 77534 ']' 00:18:36.510 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 77534 00:18:36.510 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:18:36.510 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.510 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77534 00:18:36.770 killing process with pid 77534 00:18:36.770 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:36.770 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:36.770 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77534' 00:18:36.770 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 77534 00:18:36.770 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 77534 00:18:37.707 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:37.707 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:37.707 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:37.707 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:18:37.707 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:18:37.707 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:18:37.707 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:37.707 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:37.707 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:37.707 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:37.707 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:37.707 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:37.707 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:37.707 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:37.707 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:37.707 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:37.707 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:37.707 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:37.707 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:37.707 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:37.707 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:37.707 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:37.966 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:37.966 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.966 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:37.966 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.966 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:18:37.966 09:20:31 
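Teardown leans on the comments attached when the rules were inserted: every iptables rule carries an SPDK_NVMF comment, so nvmf_tcp_fini can drop them wholesale without tracking rule numbers and then unwind the links. A rough equivalent of the sequence in the trace:

  # remove only the SPDK-tagged iptables rules, leaving everything else intact
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # unwind the bridge/veth topology and the target namespace
  ip link set nvmf_init_br nomaster
  ip link set nvmf_tgt_br nomaster
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns delete nvmf_tgt_ns_spdk   # assumption: _remove_spdk_ns (run with xtrace off above) deletes the netns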
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.0vm 00:18:37.966 ************************************ 00:18:37.966 END TEST nvmf_fips 00:18:37.966 ************************************ 00:18:37.966 00:18:37.966 real 0m16.448s 00:18:37.966 user 0m23.654s 00:18:37.966 sys 0m5.354s 00:18:37.966 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:37.966 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:37.966 09:20:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:18:37.966 09:20:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:37.966 09:20:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:37.966 09:20:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:37.966 ************************************ 00:18:37.966 START TEST nvmf_control_msg_list 00:18:37.966 ************************************ 00:18:37.966 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:18:37.966 * Looking for test storage... 00:18:37.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:37.966 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:37.966 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:37.966 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:38.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.226 --rc genhtml_branch_coverage=1 00:18:38.226 --rc genhtml_function_coverage=1 00:18:38.226 --rc genhtml_legend=1 00:18:38.226 --rc geninfo_all_blocks=1 00:18:38.226 --rc geninfo_unexecuted_blocks=1 00:18:38.226 00:18:38.226 ' 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:38.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.226 --rc genhtml_branch_coverage=1 00:18:38.226 --rc genhtml_function_coverage=1 00:18:38.226 --rc genhtml_legend=1 00:18:38.226 --rc geninfo_all_blocks=1 00:18:38.226 --rc geninfo_unexecuted_blocks=1 00:18:38.226 00:18:38.226 ' 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:38.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.226 --rc genhtml_branch_coverage=1 00:18:38.226 --rc genhtml_function_coverage=1 00:18:38.226 --rc genhtml_legend=1 00:18:38.226 --rc geninfo_all_blocks=1 00:18:38.226 --rc geninfo_unexecuted_blocks=1 00:18:38.226 00:18:38.226 ' 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:38.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.226 --rc genhtml_branch_coverage=1 00:18:38.226 --rc genhtml_function_coverage=1 00:18:38.226 --rc genhtml_legend=1 00:18:38.226 --rc geninfo_all_blocks=1 00:18:38.226 --rc 
geninfo_unexecuted_blocks=1 00:18:38.226 00:18:38.226 ' 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.226 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:38.227 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:38.227 Cannot find device "nvmf_init_br" 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:38.227 Cannot find device "nvmf_init_br2" 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:38.227 Cannot find device "nvmf_tgt_br" 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:38.227 Cannot find device "nvmf_tgt_br2" 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:38.227 Cannot find device "nvmf_init_br" 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:38.227 Cannot find device "nvmf_init_br2" 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:18:38.227 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:38.227 Cannot find device "nvmf_tgt_br" 00:18:38.227 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:18:38.227 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:38.227 Cannot find device "nvmf_tgt_br2" 00:18:38.227 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:18:38.227 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:38.227 Cannot find device "nvmf_br" 00:18:38.227 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:18:38.227 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:38.227 Cannot find 
device "nvmf_init_if" 00:18:38.227 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:18:38.227 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:38.227 Cannot find device "nvmf_init_if2" 00:18:38.227 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:18:38.228 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:38.228 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:38.228 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:18:38.228 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:38.228 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:38.228 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:18:38.228 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:38.228 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:38.228 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:38.228 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:38.228 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:38.228 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:38.228 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:38.487 09:20:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:38.487 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:38.487 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:18:38.487 00:18:38.487 --- 10.0.0.3 ping statistics --- 00:18:38.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.487 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:38.487 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:38.487 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.085 ms 00:18:38.487 00:18:38.487 --- 10.0.0.4 ping statistics --- 00:18:38.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.487 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:18:38.487 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:38.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:38.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:18:38.487 00:18:38.487 --- 10.0.0.1 ping statistics --- 00:18:38.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.488 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:18:38.488 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:38.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:38.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:18:38.488 00:18:38.488 --- 10.0.0.2 ping statistics --- 00:18:38.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.488 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:18:38.488 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:38.488 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:18:38.488 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:38.488 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:38.488 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:38.488 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:38.488 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:38.488 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:38.488 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:38.488 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:18:38.488 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:38.488 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:38.488 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:38.488 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=77975 00:18:38.488 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 77975 00:18:38.488 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 77975 ']' 00:18:38.488 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:38.488 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.488 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.488 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
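nvmfappstart backgrounds the target inside the namespace and then blocks in waitforlisten until the RPC socket answers. A hedged approximation of that pattern (rpc_get_methods is used here only as a liveness probe; the real helper also caps the number of retries):

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!

  # poll the UNIX-domain RPC socket until the target accepts commands
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
      sleep 0.1
  done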
00:18:38.488 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.488 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:38.747 [2024-12-13 09:20:32.442911] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:38.747 [2024-12-13 09:20:32.443271] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.747 [2024-12-13 09:20:32.632204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.006 [2024-12-13 09:20:32.758021] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.006 [2024-12-13 09:20:32.758383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:39.006 [2024-12-13 09:20:32.758424] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.006 [2024-12-13 09:20:32.758455] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.006 [2024-12-13 09:20:32.758472] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:39.006 [2024-12-13 09:20:32.760051] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.265 [2024-12-13 09:20:32.955747] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:39.527 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.527 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:18:39.527 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:39.527 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:39.527 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:39.527 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.527 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:18:39.527 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:18:39.527 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:18:39.527 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.527 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:39.527 [2024-12-13 09:20:33.413117] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.786 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.786 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:18:39.787 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.787 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:39.787 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.787 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:18:39.787 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.787 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:39.787 Malloc0 00:18:39.787 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.787 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:18:39.787 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.787 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:39.787 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.787 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:39.787 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.787 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:39.787 [2024-12-13 09:20:33.470659] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:39.787 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.787 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=78007 00:18:39.787 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:39.787 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=78008 00:18:39.787 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:39.787 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=78009 00:18:39.787 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 78007 00:18:39.787 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:40.045 [2024-12-13 09:20:33.719478] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:40.045 [2024-12-13 09:20:33.729748] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:40.045 [2024-12-13 09:20:33.740517] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:40.983 Initializing NVMe Controllers 00:18:40.983 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:40.983 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:18:40.983 Initialization complete. Launching workers. 00:18:40.983 ======================================================== 00:18:40.983 Latency(us) 00:18:40.983 Device Information : IOPS MiB/s Average min max 00:18:40.983 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2769.95 10.82 360.57 183.75 1542.01 00:18:40.983 ======================================================== 00:18:40.983 Total : 2769.95 10.82 360.57 183.75 1542.01 00:18:40.983 00:18:40.983 Initializing NVMe Controllers 00:18:40.983 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:40.983 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:18:40.983 Initialization complete. Launching workers. 00:18:40.983 ======================================================== 00:18:40.983 Latency(us) 00:18:40.983 Device Information : IOPS MiB/s Average min max 00:18:40.983 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 2772.00 10.83 360.25 200.84 1140.55 00:18:40.983 ======================================================== 00:18:40.983 Total : 2772.00 10.83 360.25 200.84 1140.55 00:18:40.983 00:18:40.983 Initializing NVMe Controllers 00:18:40.983 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:40.983 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:18:40.983 Initialization complete. Launching workers. 
00:18:40.983 ======================================================== 00:18:40.983 Latency(us) 00:18:40.983 Device Information : IOPS MiB/s Average min max 00:18:40.983 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2786.00 10.88 358.31 148.49 731.72 00:18:40.983 ======================================================== 00:18:40.983 Total : 2786.00 10.88 358.31 148.49 731.72 00:18:40.983 00:18:40.983 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 78008 00:18:40.983 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 78009 00:18:40.983 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:40.983 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:18:40.983 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:40.983 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:18:40.983 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:40.983 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:18:40.983 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:40.983 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:40.983 rmmod nvme_tcp 00:18:40.983 rmmod nvme_fabrics 00:18:41.242 rmmod nvme_keyring 00:18:41.243 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:41.243 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:18:41.243 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:18:41.243 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 77975 ']' 00:18:41.243 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 77975 00:18:41.243 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 77975 ']' 00:18:41.243 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 77975 00:18:41.243 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:18:41.243 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.243 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77975 00:18:41.243 killing process with pid 77975 00:18:41.243 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:41.243 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:41.243 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77975' 00:18:41.243 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 77975 00:18:41.243 09:20:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 77975 00:18:42.178 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:42.178 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:42.178 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:42.178 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:18:42.178 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:18:42.178 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:42.178 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:18:42.178 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:42.178 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:42.178 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:42.178 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:42.178 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:42.178 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:42.178 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:42.178 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:42.178 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:42.178 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:42.178 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:42.178 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:42.178 09:20:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:42.178 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:42.178 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:42.178 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:42.178 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.178 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:42.178 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.438 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:18:42.438 00:18:42.438 real 0m4.382s 00:18:42.438 user 0m6.579s 00:18:42.438 
sys 0m1.509s 00:18:42.438 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:42.438 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:42.438 ************************************ 00:18:42.439 END TEST nvmf_control_msg_list 00:18:42.439 ************************************ 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:42.439 ************************************ 00:18:42.439 START TEST nvmf_wait_for_buf 00:18:42.439 ************************************ 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:18:42.439 * Looking for test storage... 00:18:42.439 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:42.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.439 --rc genhtml_branch_coverage=1 00:18:42.439 --rc genhtml_function_coverage=1 00:18:42.439 --rc genhtml_legend=1 00:18:42.439 --rc geninfo_all_blocks=1 00:18:42.439 --rc geninfo_unexecuted_blocks=1 00:18:42.439 00:18:42.439 ' 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:42.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.439 --rc genhtml_branch_coverage=1 00:18:42.439 --rc genhtml_function_coverage=1 00:18:42.439 --rc genhtml_legend=1 00:18:42.439 --rc geninfo_all_blocks=1 00:18:42.439 --rc geninfo_unexecuted_blocks=1 00:18:42.439 00:18:42.439 ' 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:42.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.439 --rc genhtml_branch_coverage=1 00:18:42.439 --rc genhtml_function_coverage=1 00:18:42.439 --rc genhtml_legend=1 00:18:42.439 --rc geninfo_all_blocks=1 00:18:42.439 --rc geninfo_unexecuted_blocks=1 00:18:42.439 00:18:42.439 ' 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:42.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.439 --rc genhtml_branch_coverage=1 00:18:42.439 --rc genhtml_function_coverage=1 00:18:42.439 --rc genhtml_legend=1 00:18:42.439 --rc geninfo_all_blocks=1 00:18:42.439 --rc geninfo_unexecuted_blocks=1 00:18:42.439 00:18:42.439 ' 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:42.439 09:20:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:42.439 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.440 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.440 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.699 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:42.699 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:42.699 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:42.699 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:42.699 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:42.699 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:18:42.699 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
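The "[: : integer expression expected" message above comes from common.sh line 33 evaluating '[' '' -eq 1 ']' with an empty value: bash's test builtin rejects a numeric comparison against an empty string, the guarded branch is simply skipped, and the run continues unaffected. A minimal stand-alone reproduction (the variable name is illustrative, not one from the harness):

  flag=""                   # empty/unset test flag
  [ "$flag" -eq 1 ]         # -> bash: [: : integer expression expected (exit status 2)
  [ "${flag:-0}" -eq 1 ]    # defaulting the expansion avoids the complaint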
00:18:42.699 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.699 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:42.699 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:42.699 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:42.699 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.699 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:42.699 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.699 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:42.699 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:42.699 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:42.699 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:42.699 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:42.699 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:42.699 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:42.699 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:42.699 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:42.699 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:42.699 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:42.699 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:42.700 Cannot find device "nvmf_init_br" 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:42.700 Cannot find device "nvmf_init_br2" 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:42.700 Cannot find device "nvmf_tgt_br" 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:42.700 Cannot find device "nvmf_tgt_br2" 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:42.700 Cannot find device "nvmf_init_br" 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:42.700 Cannot find device "nvmf_init_br2" 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:42.700 Cannot find device "nvmf_tgt_br" 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:42.700 Cannot find device "nvmf_tgt_br2" 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:42.700 Cannot find device "nvmf_br" 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:42.700 Cannot find device "nvmf_init_if" 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:42.700 Cannot find device "nvmf_init_if2" 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:42.700 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:42.700 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:42.700 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:42.960 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:42.960 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:18:42.960 00:18:42.960 --- 10.0.0.3 ping statistics --- 00:18:42.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.960 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:42.960 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:42.960 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:18:42.960 00:18:42.960 --- 10.0.0.4 ping statistics --- 00:18:42.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.960 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:42.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:42.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:42.960 00:18:42.960 --- 10.0.0.1 ping statistics --- 00:18:42.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.960 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:42.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:42.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:18:42.960 00:18:42.960 --- 10.0.0.2 ping statistics --- 00:18:42.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.960 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:42.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=78262 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 78262 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 78262 ']' 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.960 09:20:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:43.219 [2024-12-13 09:20:36.851003] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
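Note that, unlike the control_msg_list run earlier, this target is launched with --wait-for-rpc (see the nvmf_tgt command line above), which holds off subsystem initialization until the test has shrunk the iobuf small-buffer pool. The RPCs issued next in the log amount to the following sequence, shown here as direct rpc.py calls for readability; the harness actually goes through its rpc_cmd wrapper, and the script path is assumed relative to the SPDK checkout:

  # issued while the target idles in --wait-for-rpc mode, before subsystems come up
  scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
  scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
  scripts/rpc.py framework_start_init   # only now is the framework started, with the tiny pool in place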
00:18:43.219 [2024-12-13 09:20:36.851184] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.219 [2024-12-13 09:20:37.032519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.479 [2024-12-13 09:20:37.123330] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.479 [2024-12-13 09:20:37.123439] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:43.479 [2024-12-13 09:20:37.123475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:43.479 [2024-12-13 09:20:37.123498] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:43.479 [2024-12-13 09:20:37.123513] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:43.479 [2024-12-13 09:20:37.124694] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.048 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.048 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:18:44.048 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:44.048 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:44.048 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:44.048 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.048 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:18:44.048 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:18:44.048 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:18:44.048 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.048 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:44.048 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.048 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:18:44.048 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.048 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:44.048 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.048 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:18:44.048 09:20:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.048 09:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:44.308 [2024-12-13 09:20:37.948971] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:44.308 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.308 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:18:44.308 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.308 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:44.308 Malloc0 00:18:44.308 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.308 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:18:44.308 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.308 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:44.308 [2024-12-13 09:20:38.095886] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:44.308 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.308 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:18:44.308 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.308 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:44.308 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.308 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:18:44.308 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.308 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:44.308 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.308 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:44.308 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.308 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:44.308 [2024-12-13 09:20:38.124110] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:44.308 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.308 09:20:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:44.568 [2024-12-13 09:20:38.386557] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:45.989 Initializing NVMe Controllers 00:18:45.989 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:45.989 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:18:45.989 Initialization complete. Launching workers. 00:18:45.989 ======================================================== 00:18:45.990 Latency(us) 00:18:45.990 Device Information : IOPS MiB/s Average min max 00:18:45.990 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 499.04 62.38 8016.44 4942.75 12017.97 00:18:45.990 ======================================================== 00:18:45.990 Total : 499.04 62.38 8016.44 4942.75 12017.97 00:18:45.990 00:18:45.990 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:18:45.990 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:18:45.990 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.990 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:45.990 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.990 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:18:45.990 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:18:45.990 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:45.990 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:18:45.990 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:45.990 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:18:45.990 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:45.990 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:18:45.990 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:45.990 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:45.990 rmmod nvme_tcp 00:18:45.990 rmmod nvme_fabrics 00:18:46.249 rmmod nvme_keyring 00:18:46.249 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:46.249 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:18:46.249 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:18:46.249 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 78262 ']' 00:18:46.249 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 78262 00:18:46.249 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 78262 ']' 00:18:46.249 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 78262 00:18:46.250 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:18:46.250 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.250 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78262 00:18:46.250 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:46.250 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:46.250 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78262' 00:18:46.250 killing process with pid 78262 00:18:46.250 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 78262 00:18:46.250 09:20:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 78262 00:18:47.188 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:47.188 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:47.188 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:47.188 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:18:47.188 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:18:47.188 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:47.188 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:18:47.188 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:47.188 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:47.188 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:47.188 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:47.188 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:47.188 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:47.188 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:47.188 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:47.188 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:47.188 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:47.188 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:47.188 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:47.188 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:47.188 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:47.188 09:20:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:47.188 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:47.188 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.188 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:47.188 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.188 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:18:47.188 ************************************ 00:18:47.188 END TEST nvmf_wait_for_buf 00:18:47.188 ************************************ 00:18:47.188 00:18:47.188 real 0m4.936s 00:18:47.188 user 0m4.418s 00:18:47.188 sys 0m0.930s 00:18:47.188 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:47.188 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:47.448 ************************************ 00:18:47.448 START TEST nvmf_fuzz 00:18:47.448 ************************************ 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:47.448 * Looking for test storage... 
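The wait_for_buf check traced above reads the nvmf_TCP small-buffer retry counter over the RPC interface and only passes when retries were observed (retry_count=4750 here). A minimal standalone sketch of the same query, assuming an SPDK target is already running on the default /var/tmp/spdk.sock socket and that scripts/rpc.py is on PATH; this is not the harness's rpc_cmd wrapper:

# Hypothetical reproduction of the retry-count check traced above.
retry_count=$(scripts/rpc.py iobuf_get_stats \
  | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
if [[ "$retry_count" -eq 0 ]]; then
  echo "no small-pool retries observed" >&2
  exit 1
fi
echo "observed $retry_count small-pool retries"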
00:18:47.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:47.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.448 --rc genhtml_branch_coverage=1 00:18:47.448 --rc genhtml_function_coverage=1 00:18:47.448 --rc genhtml_legend=1 00:18:47.448 --rc geninfo_all_blocks=1 00:18:47.448 --rc geninfo_unexecuted_blocks=1 00:18:47.448 00:18:47.448 ' 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:47.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.448 --rc genhtml_branch_coverage=1 00:18:47.448 --rc genhtml_function_coverage=1 00:18:47.448 --rc genhtml_legend=1 00:18:47.448 --rc geninfo_all_blocks=1 00:18:47.448 --rc geninfo_unexecuted_blocks=1 00:18:47.448 00:18:47.448 ' 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:47.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.448 --rc genhtml_branch_coverage=1 00:18:47.448 --rc genhtml_function_coverage=1 00:18:47.448 --rc genhtml_legend=1 00:18:47.448 --rc geninfo_all_blocks=1 00:18:47.448 --rc geninfo_unexecuted_blocks=1 00:18:47.448 00:18:47.448 ' 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:47.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.448 --rc genhtml_branch_coverage=1 00:18:47.448 --rc genhtml_function_coverage=1 00:18:47.448 --rc genhtml_legend=1 00:18:47.448 --rc geninfo_all_blocks=1 00:18:47.448 --rc geninfo_unexecuted_blocks=1 00:18:47.448 00:18:47.448 ' 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
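The trace above steps through the harness's dotted-version comparison (lt 1.15 2), which decides whether the installed lcov needs the extra branch/function coverage options. A minimal standalone sketch of the same idea, assuming only numeric dot- or dash-separated components matter; it is not the harness's exact implementation:

# version_lt A B: succeed when version A is strictly older than version B.
version_lt() {
  local IFS='.-'
  local -a a=($1) b=($2)
  local i
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1   # equal versions are not "less than"
}
version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov older than 2"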
00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.448 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.449 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.449 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:18:47.449 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.449 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:47.449 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:47.449 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:47.449 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:47.449 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:47.449 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:47.449 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:47.449 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:47.449 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:47.449 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:47.449 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:47.449 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:47.449 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:47.449 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
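The "[: : integer expression expected" message above is emitted while sourcing nvmf/common.sh because an empty string is fed to an arithmetic test ('[' '' -eq 1 ']'); the test simply evaluates false and execution continues. A common way to make such a check robust is to default the value before comparing; SOME_FLAG below is a placeholder name, not the variable common.sh actually tests:

# SOME_FLAG is hypothetical; substitute whichever variable may be unset or empty.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
  echo "flag enabled"
else
  echo "flag disabled or unset"
fi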
00:18:47.449 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:47.449 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:47.449 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:47.449 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.449 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:47.449 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.708 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:47.708 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:47.708 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:47.708 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:47.708 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:47.708 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:47.709 Cannot find device "nvmf_init_br" 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:18:47.709 09:20:41 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:47.709 Cannot find device "nvmf_init_br2" 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:47.709 Cannot find device "nvmf_tgt_br" 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:47.709 Cannot find device "nvmf_tgt_br2" 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:47.709 Cannot find device "nvmf_init_br" 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:47.709 Cannot find device "nvmf_init_br2" 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:47.709 Cannot find device "nvmf_tgt_br" 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:47.709 Cannot find device "nvmf_tgt_br2" 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:47.709 Cannot find device "nvmf_br" 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:47.709 Cannot find device "nvmf_init_if" 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:47.709 Cannot find device "nvmf_init_if2" 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:47.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:47.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:47.709 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:47.969 09:20:41 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:47.969 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:47.969 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:18:47.969 00:18:47.969 --- 10.0.0.3 ping statistics --- 00:18:47.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.969 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:47.969 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:47.969 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:18:47.969 00:18:47.969 --- 10.0.0.4 ping statistics --- 00:18:47.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.969 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:47.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:47.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:18:47.969 00:18:47.969 --- 10.0.0.1 ping statistics --- 00:18:47.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.969 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:47.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:47.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:18:47.969 00:18:47.969 --- 10.0.0.2 ping statistics --- 00:18:47.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.969 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@461 -- # return 0 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=78572 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 78572 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 78572 ']' 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
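The block above builds the virtual test network the fuzz target listens on: two veth pairs, a network namespace for the target side, a bridge joining the host-side ends, iptables ACCEPT rules for port 4420, and ping checks in both directions, before nvmf_tgt is started inside the namespace. A condensed sketch of the first initiator/target pair only, using the same interface names and addresses as the trace (the second pair, the bridge FORWARD rule, and cleanup are omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3    # host-side initiator reaches the target address inside the namespace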
00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.969 09:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:49.348 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.348 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:18:49.348 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:49.348 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.348 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:49.348 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.348 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:49.348 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.348 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:49.348 Malloc0 00:18:49.348 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.348 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:49.348 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.348 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:49.348 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.348 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:49.348 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.348 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:49.348 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.348 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:49.348 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.348 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:49.348 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.348 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:18:49.348 09:20:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:18:49.916 Shutting down the fuzz application 00:18:49.916 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:50.484 Shutting down the fuzz application 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:50.484 rmmod nvme_tcp 00:18:50.484 rmmod nvme_fabrics 00:18:50.484 rmmod nvme_keyring 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 78572 ']' 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 78572 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 78572 ']' 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 78572 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78572 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:50.484 killing process with pid 78572 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78572' 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 78572 00:18:50.484 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 78572 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:51.863 09:20:45 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:18:51.863 00:18:51.863 real 0m4.488s 00:18:51.863 user 0m4.890s 00:18:51.863 sys 0m0.917s 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:51.863 ************************************ 00:18:51.863 END TEST nvmf_fuzz 00:18:51.863 ************************************ 00:18:51.863 09:20:45 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:51.863 09:20:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:51.864 09:20:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:51.864 ************************************ 00:18:51.864 START TEST nvmf_multiconnection 00:18:51.864 ************************************ 00:18:51.864 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:51.864 * Looking for test storage... 00:18:51.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:51.864 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:51.864 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:18:51.864 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:52.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.124 --rc genhtml_branch_coverage=1 00:18:52.124 --rc genhtml_function_coverage=1 00:18:52.124 --rc genhtml_legend=1 00:18:52.124 --rc geninfo_all_blocks=1 00:18:52.124 --rc geninfo_unexecuted_blocks=1 00:18:52.124 00:18:52.124 ' 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:52.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.124 --rc genhtml_branch_coverage=1 00:18:52.124 --rc genhtml_function_coverage=1 00:18:52.124 --rc genhtml_legend=1 00:18:52.124 --rc geninfo_all_blocks=1 00:18:52.124 --rc geninfo_unexecuted_blocks=1 00:18:52.124 00:18:52.124 ' 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:52.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.124 --rc genhtml_branch_coverage=1 00:18:52.124 --rc genhtml_function_coverage=1 00:18:52.124 --rc genhtml_legend=1 00:18:52.124 --rc geninfo_all_blocks=1 00:18:52.124 --rc geninfo_unexecuted_blocks=1 00:18:52.124 00:18:52.124 ' 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:52.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.124 --rc genhtml_branch_coverage=1 00:18:52.124 --rc genhtml_function_coverage=1 00:18:52.124 --rc genhtml_legend=1 00:18:52.124 --rc geninfo_all_blocks=1 00:18:52.124 --rc geninfo_unexecuted_blocks=1 00:18:52.124 00:18:52.124 ' 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.124 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.124 
09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:52.125 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:52.125 09:20:45 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:52.125 Cannot find device "nvmf_init_br" 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:52.125 Cannot find device "nvmf_init_br2" 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:52.125 Cannot find device "nvmf_tgt_br" 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:52.125 Cannot find device "nvmf_tgt_br2" 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:52.125 Cannot find device "nvmf_init_br" 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:52.125 Cannot find device "nvmf_init_br2" 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:18:52.125 09:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:52.125 Cannot find device "nvmf_tgt_br" 00:18:52.125 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:18:52.125 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:52.384 Cannot find device "nvmf_tgt_br2" 00:18:52.384 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:18:52.384 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:52.384 Cannot find device "nvmf_br" 00:18:52.384 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:18:52.384 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:52.384 Cannot find device "nvmf_init_if" 00:18:52.384 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:18:52.384 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:18:52.384 Cannot find device "nvmf_init_if2" 00:18:52.384 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:18:52.384 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:52.384 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:52.384 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:18:52.384 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:52.384 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:52.384 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:18:52.384 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:52.384 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:52.384 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:52.384 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:52.384 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:52.384 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:52.384 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:52.385 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:52.385 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:52.385 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:52.385 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:52.385 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:52.385 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:52.385 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:52.385 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:52.385 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:52.385 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:52.385 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:52.385 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:18:52.385 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:52.385 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:52.385 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:52.385 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:52.385 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:52.644 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:52.644 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:18:52.644 00:18:52.644 --- 10.0.0.3 ping statistics --- 00:18:52.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.644 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:52.644 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:52.644 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:18:52.644 00:18:52.644 --- 10.0.0.4 ping statistics --- 00:18:52.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.644 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:52.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:52.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:52.644 00:18:52.644 --- 10.0.0.1 ping statistics --- 00:18:52.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.644 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:52.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:52.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:18:52.644 00:18:52.644 --- 10.0.0.2 ping statistics --- 00:18:52.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.644 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@461 -- # return 0 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=78839 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 78839 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 78839 ']' 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
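The nvmf_veth_init sequence above builds a small two-sided test network: two initiator veth pairs on the host, two target veth pairs moved into the nvmf_tgt_ns_spdk namespace, all tied together by the nvmf_br bridge, with iptables rules admitting NVMe/TCP traffic on port 4420 and a one-packet ping in each direction as a sanity check. A minimal standalone sketch of that topology, assuming the interface, namespace and address names used in this run and no pre-existing devices with those names:

  ip netns add nvmf_tgt_ns_spdk                                   # target runs in its own net namespace
  ip link add nvmf_init_if  type veth peer name nvmf_init_br      # initiator-side veth pairs
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # target-side veth pairs
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                 # move the target ends into the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator addresses
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up       # bridge joins the four host-side peers
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on 4420
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # sanity-check both directions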
00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.644 09:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:52.644 [2024-12-13 09:20:46.489710] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:52.644 [2024-12-13 09:20:46.489876] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.903 [2024-12-13 09:20:46.673549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:52.903 [2024-12-13 09:20:46.760631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.903 [2024-12-13 09:20:46.760707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.903 [2024-12-13 09:20:46.760740] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.903 [2024-12-13 09:20:46.760752] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.903 [2024-12-13 09:20:46.760763] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:52.903 [2024-12-13 09:20:46.762389] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.903 [2024-12-13 09:20:46.762526] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:52.903 [2024-12-13 09:20:46.762677] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:52.903 [2024-12-13 09:20:46.762740] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.162 [2024-12-13 09:20:46.937136] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:53.729 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.729 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:18:53.729 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:53.729 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:53.729 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:53.729 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.729 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:53.729 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.729 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:53.729 [2024-12-13 09:20:47.509677] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.729 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.729 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:18:53.729 09:20:47 
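Bringing up the target itself amounts to launching nvmf_tgt inside the namespace, waiting for its RPC socket, and creating the TCP transport, which is what the nvmfappstart and nvmf_create_transport steps above do. A rough standalone equivalent, assuming the repo path and default RPC socket from this run (the core mask, log-flag mask and transport arguments are copied from the log; the readiness poll via spdk_get_version stands in for the harness's waitforlisten helper):

  SPDK=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &   # 4-core mask, all tracepoint groups
  tgt_pid=$!
  # poll the RPC socket until the target answers, bailing out if it died during startup
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      kill -0 "$tgt_pid" || exit 1
      sleep 0.5
  done
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192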
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:53.729 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:53.729 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.729 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:53.729 Malloc1 00:18:53.729 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.729 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:53.729 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.729 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:53.989 [2024-12-13 09:20:47.630039] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:53.989 Malloc2 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:53.989 Malloc3 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.989 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.248 Malloc4 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.248 Malloc5 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:54.248 
09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:18:54.248 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.249 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.249 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.249 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:54.249 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:54.249 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.249 09:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.249 Malloc6 00:18:54.249 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.249 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:54.249 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.249 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.249 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.249 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:54.249 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.249 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.249 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.249 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:18:54.249 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.249 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.249 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.249 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:54.249 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:54.249 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.249 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.508 Malloc7 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.508 Malloc8 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.508 
09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.508 Malloc9 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.508 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.767 Malloc10 00:18:54.767 09:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.767 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:54.767 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.767 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.767 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.767 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:54.767 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.767 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.767 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.767 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:18:54.767 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.767 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.767 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.767 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:54.767 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:54.767 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.767 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.767 Malloc11 00:18:54.767 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.768 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:54.768 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.768 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.768 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.768 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:54.768 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.768 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.768 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.768 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:18:54.768 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.768 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:54.768 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.768 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:18:54.768 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:54.768 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid=5267ba90-6d03-4c73-b69a-15b62f92a67a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:18:55.026 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:55.026 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:55.026 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:55.026 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:55.026 09:20:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:56.929 09:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:56.929 09:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:56.929 09:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:18:56.929 09:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:56.929 09:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:56.929 09:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:56.929 09:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:56.929 09:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid=5267ba90-6d03-4c73-b69a-15b62f92a67a -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:18:57.187 09:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:57.187 09:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:57.187 09:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:57.187 09:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:57.188 09:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:59.090 09:20:52 
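Each of the eleven subsystems above is provisioned with the same four RPCs: create a 64 MB malloc bdev with 512-byte blocks, create an allow-any-host subsystem with serial SPDKi, attach the bdev as a namespace, and add a TCP listener on 10.0.0.3:4420. The test issues them through rpc_cmd; an equivalent loop against the same RPC socket would look roughly like this (a sketch reusing the names from the log, and assuming the transport has already been created as above):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
  for i in $(seq 1 11); do
      "$RPC" bdev_malloc_create 64 512 -b "Malloc$i"                                 # 64 MB bdev, 512 B blocks
      "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"      # allow any host, serial SPDK$i
      "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.3 -s 4420
  done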
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:59.090 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:59.090 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:18:59.090 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:59.090 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:59.090 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:59.090 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:59.090 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid=5267ba90-6d03-4c73-b69a-15b62f92a67a -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:18:59.349 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:59.349 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:59.349 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:59.349 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:59.349 09:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:19:01.252 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:01.252 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:01.252 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:19:01.252 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:01.252 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:01.252 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:19:01.252 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:01.252 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid=5267ba90-6d03-4c73-b69a-15b62f92a67a -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:19:01.512 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:19:01.512 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:19:01.512 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:01.512 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:19:01.512 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:19:03.442 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:03.442 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:03.442 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:19:03.442 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:03.442 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:03.442 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:19:03.442 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.442 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid=5267ba90-6d03-4c73-b69a-15b62f92a67a -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:19:03.442 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:19:03.442 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:19:03.442 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:03.442 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:03.442 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:19:05.973 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:05.973 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:19:05.973 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:05.973 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:05.973 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:05.973 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:19:05.973 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.973 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid=5267ba90-6d03-4c73-b69a-15b62f92a67a -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:19:05.973 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:19:05.973 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:19:05.973 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local 
nvme_device_counter=1 nvme_devices=0 00:19:05.973 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:05.973 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:19:07.875 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:07.875 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:07.875 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:19:07.875 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:07.875 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:07.875 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:19:07.875 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.875 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid=5267ba90-6d03-4c73-b69a-15b62f92a67a -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:19:07.875 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:19:07.875 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:19:07.875 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:07.875 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:07.875 09:21:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:19:09.777 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:09.777 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:09.777 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:19:09.777 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:09.777 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:09.777 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:19:09.777 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:09.777 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid=5267ba90-6d03-4c73-b69a-15b62f92a67a -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:19:10.036 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:19:10.036 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1202 -- # local i=0 00:19:10.036 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:10.036 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:10.036 09:21:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:19:11.937 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:11.937 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:11.937 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:19:12.195 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:12.195 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:12.195 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:19:12.195 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:12.195 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid=5267ba90-6d03-4c73-b69a-15b62f92a67a -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:19:12.195 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:19:12.195 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:19:12.195 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:12.195 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:12.195 09:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:19:14.097 09:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:14.097 09:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:14.097 09:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:19:14.356 09:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:14.356 09:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:14.356 09:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:19:14.356 09:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:14.356 09:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid=5267ba90-6d03-4c73-b69a-15b62f92a67a -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:19:14.356 09:21:08 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:19:14.356 09:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:19:14.356 09:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:14.356 09:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:14.356 09:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:19:16.887 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:16.888 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:19:16.888 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:16.888 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:16.888 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:16.888 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:19:16.888 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:16.888 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid=5267ba90-6d03-4c73-b69a-15b62f92a67a -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:19:16.888 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:19:16.888 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:19:16.888 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:16.888 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:16.888 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:19:18.791 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:18.791 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:18.791 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:19:18.791 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:18.791 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:18.791 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:19:18.791 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:19:18.791 [global] 00:19:18.791 thread=1 00:19:18.791 invalidate=1 00:19:18.791 rw=read 00:19:18.791 time_based=1 
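[editor annotation] The xtrace lines above (common/autotest_common.sh:1202-1212) show the shape of the waitforserial helper that gates each nvme connect in this loop: it optionally takes an expected device count, sleeps, then polls lsblk -l -o NAME,SERIAL up to 16 times until the number of block devices carrying the requested serial matches the expectation. A minimal bash reconstruction of that pattern, pieced together from the trace rather than copied from the SPDK source, is sketched here; the fio job file emitted by scripts/fio-wrapper continues on the following lines.

waitforserial() {
    # $1 = serial string to look for (e.g. SPDK5), $2 = optional expected device count
    local i=0
    local nvme_device_counter=1 nvme_devices=0
    [[ -n ${2:-} ]] && nvme_device_counter=$2
    sleep 2                                    # give the kernel/udev time to surface the namespace
    while ((i++ <= 15)); do
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$1")
        ((nvme_devices == nvme_device_counter)) && return 0
        sleep 2
    done
    return 1                                   # device never showed up within the retry budget
}

# Used after each connect, as in the trace above:
#   nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 \
#       --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a \
#       --hostid=5267ba90-6d03-4c73-b69a-15b62f92a67a
#   waitforserial SPDK5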
00:19:18.791 runtime=10 00:19:18.791 ioengine=libaio 00:19:18.791 direct=1 00:19:18.791 bs=262144 00:19:18.791 iodepth=64 00:19:18.791 norandommap=1 00:19:18.791 numjobs=1 00:19:18.791 00:19:18.791 [job0] 00:19:18.791 filename=/dev/nvme0n1 00:19:18.791 [job1] 00:19:18.791 filename=/dev/nvme10n1 00:19:18.791 [job2] 00:19:18.791 filename=/dev/nvme1n1 00:19:18.791 [job3] 00:19:18.791 filename=/dev/nvme2n1 00:19:18.791 [job4] 00:19:18.791 filename=/dev/nvme3n1 00:19:18.791 [job5] 00:19:18.791 filename=/dev/nvme4n1 00:19:18.791 [job6] 00:19:18.791 filename=/dev/nvme5n1 00:19:18.791 [job7] 00:19:18.791 filename=/dev/nvme6n1 00:19:18.791 [job8] 00:19:18.791 filename=/dev/nvme7n1 00:19:18.791 [job9] 00:19:18.791 filename=/dev/nvme8n1 00:19:18.791 [job10] 00:19:18.791 filename=/dev/nvme9n1 00:19:18.791 Could not set queue depth (nvme0n1) 00:19:18.791 Could not set queue depth (nvme10n1) 00:19:18.791 Could not set queue depth (nvme1n1) 00:19:18.791 Could not set queue depth (nvme2n1) 00:19:18.791 Could not set queue depth (nvme3n1) 00:19:18.791 Could not set queue depth (nvme4n1) 00:19:18.791 Could not set queue depth (nvme5n1) 00:19:18.791 Could not set queue depth (nvme6n1) 00:19:18.791 Could not set queue depth (nvme7n1) 00:19:18.791 Could not set queue depth (nvme8n1) 00:19:18.791 Could not set queue depth (nvme9n1) 00:19:18.791 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:18.791 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:18.791 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:18.791 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:18.791 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:18.791 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:18.791 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:18.791 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:18.791 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:18.791 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:18.791 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:18.791 fio-3.35 00:19:18.791 Starting 11 threads 00:19:31.009 00:19:31.009 job0: (groupid=0, jobs=1): err= 0: pid=79300: Fri Dec 13 09:21:23 2024 00:19:31.009 read: IOPS=106, BW=26.7MiB/s (28.0MB/s)(271MiB/10147msec) 00:19:31.009 slat (usec): min=20, max=183687, avg=9255.17, stdev=24021.95 00:19:31.009 clat (msec): min=16, max=842, avg=589.42, stdev=136.26 00:19:31.009 lat (msec): min=16, max=842, avg=598.67, stdev=138.18 00:19:31.009 clat percentiles (msec): 00:19:31.009 | 1.00th=[ 22], 5.00th=[ 275], 10.00th=[ 405], 20.00th=[ 558], 00:19:31.009 | 30.00th=[ 584], 40.00th=[ 609], 50.00th=[ 625], 60.00th=[ 642], 00:19:31.009 | 70.00th=[ 659], 80.00th=[ 676], 90.00th=[ 701], 95.00th=[ 718], 00:19:31.009 | 99.00th=[ 751], 99.50th=[ 818], 99.90th=[ 844], 99.95th=[ 844], 00:19:31.009 | 99.99th=[ 844] 00:19:31.009 bw ( KiB/s): min=20480, max=38400, 
per=2.82%, avg=26086.40, stdev=4474.24, samples=20 00:19:31.009 iops : min= 80, max= 150, avg=101.90, stdev=17.48, samples=20 00:19:31.009 lat (msec) : 20=0.55%, 50=0.46%, 100=0.74%, 250=2.59%, 500=8.59% 00:19:31.009 lat (msec) : 750=86.43%, 1000=0.65% 00:19:31.009 cpu : usr=0.03%, sys=0.54%, ctx=225, majf=0, minf=4097 00:19:31.009 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=3.0%, >=64=94.2% 00:19:31.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.009 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.009 issued rwts: total=1083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.009 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.009 job1: (groupid=0, jobs=1): err= 0: pid=79301: Fri Dec 13 09:21:23 2024 00:19:31.009 read: IOPS=113, BW=28.4MiB/s (29.8MB/s)(288MiB/10155msec) 00:19:31.009 slat (usec): min=19, max=327188, avg=8573.76, stdev=26106.68 00:19:31.009 clat (msec): min=14, max=804, avg=554.34, stdev=191.42 00:19:31.009 lat (msec): min=15, max=922, avg=562.91, stdev=194.46 00:19:31.009 clat percentiles (msec): 00:19:31.009 | 1.00th=[ 48], 5.00th=[ 109], 10.00th=[ 174], 20.00th=[ 502], 00:19:31.009 | 30.00th=[ 575], 40.00th=[ 600], 50.00th=[ 617], 60.00th=[ 634], 00:19:31.009 | 70.00th=[ 659], 80.00th=[ 684], 90.00th=[ 726], 95.00th=[ 743], 00:19:31.009 | 99.00th=[ 776], 99.50th=[ 793], 99.90th=[ 802], 99.95th=[ 802], 00:19:31.009 | 99.99th=[ 802] 00:19:31.009 bw ( KiB/s): min=16896, max=83968, per=3.01%, avg=27878.40, stdev=13740.89, samples=20 00:19:31.009 iops : min= 66, max= 328, avg=108.90, stdev=53.68, samples=20 00:19:31.009 lat (msec) : 20=0.17%, 50=1.39%, 100=2.78%, 250=10.84%, 500=4.08% 00:19:31.009 lat (msec) : 750=78.14%, 1000=2.60% 00:19:31.009 cpu : usr=0.08%, sys=0.44%, ctx=253, majf=0, minf=4098 00:19:31.009 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.5% 00:19:31.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.009 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.009 issued rwts: total=1153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.009 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.009 job2: (groupid=0, jobs=1): err= 0: pid=79302: Fri Dec 13 09:21:23 2024 00:19:31.009 read: IOPS=521, BW=130MiB/s (137MB/s)(1308MiB/10040msec) 00:19:31.009 slat (usec): min=20, max=181619, avg=1906.69, stdev=5279.16 00:19:31.009 clat (msec): min=35, max=326, avg=120.78, stdev=26.51 00:19:31.009 lat (msec): min=42, max=334, avg=122.69, stdev=26.68 00:19:31.009 clat percentiles (msec): 00:19:31.009 | 1.00th=[ 79], 5.00th=[ 102], 10.00th=[ 106], 20.00th=[ 110], 00:19:31.009 | 30.00th=[ 112], 40.00th=[ 115], 50.00th=[ 117], 60.00th=[ 120], 00:19:31.009 | 70.00th=[ 123], 80.00th=[ 127], 90.00th=[ 134], 95.00th=[ 144], 00:19:31.009 | 99.00th=[ 271], 99.50th=[ 292], 99.90th=[ 296], 99.95th=[ 326], 00:19:31.009 | 99.99th=[ 326] 00:19:31.009 bw ( KiB/s): min=37376, max=146944, per=14.31%, avg=132315.30, stdev=23316.18, samples=20 00:19:31.009 iops : min= 146, max= 574, avg=516.75, stdev=91.04, samples=20 00:19:31.009 lat (msec) : 50=0.15%, 100=3.98%, 250=93.84%, 500=2.03% 00:19:31.009 cpu : usr=0.30%, sys=2.25%, ctx=1097, majf=0, minf=4097 00:19:31.009 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:31.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, 
>=64=0.0% 00:19:31.009 issued rwts: total=5231,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.009 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.010 job3: (groupid=0, jobs=1): err= 0: pid=79303: Fri Dec 13 09:21:23 2024 00:19:31.010 read: IOPS=104, BW=26.1MiB/s (27.3MB/s)(265MiB/10147msec) 00:19:31.010 slat (usec): min=20, max=274114, avg=9449.39, stdev=26654.74 00:19:31.010 clat (msec): min=18, max=877, avg=603.43, stdev=116.05 00:19:31.010 lat (msec): min=20, max=877, avg=612.88, stdev=117.84 00:19:31.010 clat percentiles (msec): 00:19:31.010 | 1.00th=[ 182], 5.00th=[ 351], 10.00th=[ 451], 20.00th=[ 567], 00:19:31.010 | 30.00th=[ 584], 40.00th=[ 609], 50.00th=[ 634], 60.00th=[ 651], 00:19:31.010 | 70.00th=[ 667], 80.00th=[ 684], 90.00th=[ 709], 95.00th=[ 718], 00:19:31.010 | 99.00th=[ 768], 99.50th=[ 768], 99.90th=[ 768], 99.95th=[ 877], 00:19:31.010 | 99.99th=[ 877] 00:19:31.010 bw ( KiB/s): min=17920, max=34304, per=2.75%, avg=25472.00, stdev=4612.12, samples=20 00:19:31.010 iops : min= 70, max= 134, avg=99.50, stdev=18.02, samples=20 00:19:31.010 lat (msec) : 20=0.09%, 50=0.47%, 250=1.51%, 500=10.21%, 750=86.58% 00:19:31.010 lat (msec) : 1000=1.13% 00:19:31.010 cpu : usr=0.06%, sys=0.40%, ctx=218, majf=0, minf=4097 00:19:31.010 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.0% 00:19:31.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.010 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.010 issued rwts: total=1058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.010 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.010 job4: (groupid=0, jobs=1): err= 0: pid=79304: Fri Dec 13 09:21:23 2024 00:19:31.010 read: IOPS=104, BW=26.1MiB/s (27.4MB/s)(265MiB/10147msec) 00:19:31.010 slat (usec): min=20, max=304469, avg=9439.27, stdev=26068.89 00:19:31.010 clat (msec): min=19, max=784, avg=602.30, stdev=108.51 00:19:31.010 lat (msec): min=20, max=860, avg=611.73, stdev=110.18 00:19:31.010 clat percentiles (msec): 00:19:31.010 | 1.00th=[ 48], 5.00th=[ 401], 10.00th=[ 542], 20.00th=[ 575], 00:19:31.010 | 30.00th=[ 592], 40.00th=[ 600], 50.00th=[ 617], 60.00th=[ 634], 00:19:31.010 | 70.00th=[ 651], 80.00th=[ 676], 90.00th=[ 693], 95.00th=[ 709], 00:19:31.010 | 99.00th=[ 735], 99.50th=[ 743], 99.90th=[ 760], 99.95th=[ 785], 00:19:31.010 | 99.99th=[ 785] 00:19:31.010 bw ( KiB/s): min=18944, max=31744, per=2.76%, avg=25499.50, stdev=3411.33, samples=20 00:19:31.010 iops : min= 74, max= 124, avg=99.60, stdev=13.34, samples=20 00:19:31.010 lat (msec) : 20=0.09%, 50=1.04%, 100=0.38%, 250=1.23%, 500=4.91% 00:19:31.010 lat (msec) : 750=91.89%, 1000=0.47% 00:19:31.010 cpu : usr=0.05%, sys=0.53%, ctx=208, majf=0, minf=4097 00:19:31.010 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.1% 00:19:31.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.010 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.010 issued rwts: total=1060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.010 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.010 job5: (groupid=0, jobs=1): err= 0: pid=79305: Fri Dec 13 09:21:23 2024 00:19:31.010 read: IOPS=233, BW=58.5MiB/s (61.3MB/s)(590MiB/10086msec) 00:19:31.010 slat (usec): min=20, max=70517, avg=4241.50, stdev=10316.11 00:19:31.010 clat (msec): min=32, max=387, avg=268.99, stdev=38.14 00:19:31.010 lat (msec): min=32, max=396, avg=273.23, stdev=38.50 00:19:31.010 clat 
percentiles (msec): 00:19:31.010 | 1.00th=[ 101], 5.00th=[ 224], 10.00th=[ 239], 20.00th=[ 255], 00:19:31.010 | 30.00th=[ 264], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 279], 00:19:31.010 | 70.00th=[ 284], 80.00th=[ 292], 90.00th=[ 300], 95.00th=[ 313], 00:19:31.010 | 99.00th=[ 338], 99.50th=[ 355], 99.90th=[ 368], 99.95th=[ 388], 00:19:31.010 | 99.99th=[ 388] 00:19:31.010 bw ( KiB/s): min=51200, max=67719, per=6.36%, avg=58795.05, stdev=4066.22, samples=20 00:19:31.010 iops : min= 200, max= 264, avg=229.55, stdev=15.85, samples=20 00:19:31.010 lat (msec) : 50=0.72%, 100=0.13%, 250=15.51%, 500=83.64% 00:19:31.010 cpu : usr=0.16%, sys=1.01%, ctx=459, majf=0, minf=4097 00:19:31.010 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:19:31.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.010 issued rwts: total=2360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.010 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.010 job6: (groupid=0, jobs=1): err= 0: pid=79306: Fri Dec 13 09:21:23 2024 00:19:31.010 read: IOPS=524, BW=131MiB/s (138MB/s)(1317MiB/10043msec) 00:19:31.010 slat (usec): min=20, max=118376, avg=1892.40, stdev=4924.40 00:19:31.010 clat (msec): min=40, max=317, avg=119.95, stdev=23.11 00:19:31.010 lat (msec): min=52, max=324, avg=121.84, stdev=23.30 00:19:31.010 clat percentiles (msec): 00:19:31.010 | 1.00th=[ 85], 5.00th=[ 102], 10.00th=[ 106], 20.00th=[ 110], 00:19:31.010 | 30.00th=[ 112], 40.00th=[ 114], 50.00th=[ 117], 60.00th=[ 120], 00:19:31.010 | 70.00th=[ 123], 80.00th=[ 126], 90.00th=[ 134], 95.00th=[ 144], 00:19:31.010 | 99.00th=[ 275], 99.50th=[ 296], 99.90th=[ 317], 99.95th=[ 317], 00:19:31.010 | 99.99th=[ 317] 00:19:31.010 bw ( KiB/s): min=66180, max=145408, per=14.41%, avg=133280.20, stdev=17376.49, samples=20 00:19:31.010 iops : min= 258, max= 568, avg=520.60, stdev=67.98, samples=20 00:19:31.010 lat (msec) : 50=0.02%, 100=3.81%, 250=95.07%, 500=1.10% 00:19:31.010 cpu : usr=0.42%, sys=2.17%, ctx=1063, majf=0, minf=4097 00:19:31.010 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:31.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.010 issued rwts: total=5269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.010 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.010 job7: (groupid=0, jobs=1): err= 0: pid=79307: Fri Dec 13 09:21:23 2024 00:19:31.010 read: IOPS=235, BW=58.8MiB/s (61.6MB/s)(593MiB/10091msec) 00:19:31.010 slat (usec): min=20, max=82812, avg=4210.09, stdev=10281.57 00:19:31.010 clat (msec): min=16, max=376, avg=267.62, stdev=32.66 00:19:31.010 lat (msec): min=17, max=376, avg=271.83, stdev=33.00 00:19:31.010 clat percentiles (msec): 00:19:31.010 | 1.00th=[ 122], 5.00th=[ 224], 10.00th=[ 241], 20.00th=[ 253], 00:19:31.010 | 30.00th=[ 259], 40.00th=[ 266], 50.00th=[ 271], 60.00th=[ 275], 00:19:31.010 | 70.00th=[ 284], 80.00th=[ 288], 90.00th=[ 300], 95.00th=[ 305], 00:19:31.010 | 99.00th=[ 326], 99.50th=[ 351], 99.90th=[ 376], 99.95th=[ 376], 00:19:31.010 | 99.99th=[ 376] 00:19:31.010 bw ( KiB/s): min=53760, max=65536, per=6.39%, avg=59110.40, stdev=3332.56, samples=20 00:19:31.010 iops : min= 210, max= 256, avg=230.90, stdev=13.02, samples=20 00:19:31.010 lat (msec) : 20=0.21%, 50=0.13%, 100=0.21%, 250=16.44%, 500=83.01% 00:19:31.010 
cpu : usr=0.15%, sys=1.14%, ctx=473, majf=0, minf=4097 00:19:31.010 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.3% 00:19:31.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.010 issued rwts: total=2372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.010 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.010 job8: (groupid=0, jobs=1): err= 0: pid=79308: Fri Dec 13 09:21:23 2024 00:19:31.010 read: IOPS=231, BW=58.0MiB/s (60.8MB/s)(586MiB/10095msec) 00:19:31.010 slat (usec): min=20, max=131541, avg=4266.58, stdev=10751.78 00:19:31.010 clat (msec): min=15, max=383, avg=271.20, stdev=41.05 00:19:31.010 lat (msec): min=16, max=383, avg=275.46, stdev=41.35 00:19:31.010 clat percentiles (msec): 00:19:31.010 | 1.00th=[ 61], 5.00th=[ 222], 10.00th=[ 236], 20.00th=[ 253], 00:19:31.010 | 30.00th=[ 262], 40.00th=[ 268], 50.00th=[ 275], 60.00th=[ 279], 00:19:31.010 | 70.00th=[ 288], 80.00th=[ 296], 90.00th=[ 313], 95.00th=[ 330], 00:19:31.010 | 99.00th=[ 351], 99.50th=[ 359], 99.90th=[ 384], 99.95th=[ 384], 00:19:31.010 | 99.99th=[ 384] 00:19:31.010 bw ( KiB/s): min=49152, max=65536, per=6.31%, avg=58348.25, stdev=3933.59, samples=20 00:19:31.010 iops : min= 192, max= 256, avg=227.90, stdev=15.36, samples=20 00:19:31.010 lat (msec) : 20=0.17%, 100=1.49%, 250=16.44%, 500=81.90% 00:19:31.010 cpu : usr=0.11%, sys=1.11%, ctx=457, majf=0, minf=4097 00:19:31.010 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:19:31.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.010 issued rwts: total=2342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.010 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.010 job9: (groupid=0, jobs=1): err= 0: pid=79309: Fri Dec 13 09:21:23 2024 00:19:31.010 read: IOPS=1364, BW=341MiB/s (358MB/s)(3420MiB/10024msec) 00:19:31.010 slat (usec): min=19, max=19818, avg=727.92, stdev=2677.58 00:19:31.010 clat (usec): min=15919, max=75017, avg=46104.35, stdev=6001.08 00:19:31.010 lat (usec): min=16212, max=75045, avg=46832.27, stdev=6101.41 00:19:31.010 clat percentiles (usec): 00:19:31.010 | 1.00th=[34866], 5.00th=[35914], 10.00th=[36963], 20.00th=[40633], 00:19:31.010 | 30.00th=[43254], 40.00th=[44303], 50.00th=[45351], 60.00th=[49021], 00:19:31.010 | 70.00th=[50070], 80.00th=[51119], 90.00th=[52167], 95.00th=[55837], 00:19:31.010 | 99.00th=[59507], 99.50th=[60031], 99.90th=[61080], 99.95th=[62653], 00:19:31.010 | 99.99th=[65274] 00:19:31.010 bw ( KiB/s): min=333312, max=354304, per=37.69%, avg=348569.60, stdev=6203.45, samples=20 00:19:31.010 iops : min= 1302, max= 1384, avg=1361.60, stdev=24.23, samples=20 00:19:31.010 lat (msec) : 20=0.07%, 50=67.18%, 100=32.76% 00:19:31.010 cpu : usr=0.57%, sys=3.87%, ctx=1152, majf=0, minf=4097 00:19:31.010 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:19:31.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.010 issued rwts: total=13679,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.011 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.011 job10: (groupid=0, jobs=1): err= 0: pid=79310: Fri Dec 13 09:21:23 2024 00:19:31.011 read: IOPS=106, BW=26.7MiB/s 
(28.0MB/s)(271MiB/10144msec) 00:19:31.011 slat (usec): min=19, max=288223, avg=9256.77, stdev=26478.94 00:19:31.011 clat (msec): min=74, max=827, avg=589.86, stdev=142.01 00:19:31.011 lat (msec): min=74, max=849, avg=599.11, stdev=144.51 00:19:31.011 clat percentiles (msec): 00:19:31.011 | 1.00th=[ 77], 5.00th=[ 165], 10.00th=[ 430], 20.00th=[ 567], 00:19:31.011 | 30.00th=[ 584], 40.00th=[ 609], 50.00th=[ 625], 60.00th=[ 642], 00:19:31.011 | 70.00th=[ 667], 80.00th=[ 684], 90.00th=[ 701], 95.00th=[ 718], 00:19:31.011 | 99.00th=[ 743], 99.50th=[ 751], 99.90th=[ 785], 99.95th=[ 827], 00:19:31.011 | 99.99th=[ 827] 00:19:31.011 bw ( KiB/s): min=18432, max=39503, per=2.82%, avg=26066.35, stdev=4973.66, samples=20 00:19:31.011 iops : min= 72, max= 154, avg=101.70, stdev=19.41, samples=20 00:19:31.011 lat (msec) : 100=1.29%, 250=6.19%, 500=4.62%, 750=87.62%, 1000=0.28% 00:19:31.011 cpu : usr=0.06%, sys=0.50%, ctx=204, majf=0, minf=4097 00:19:31.011 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=3.0%, >=64=94.2% 00:19:31.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.011 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.011 issued rwts: total=1082,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.011 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.011 00:19:31.011 Run status group 0 (all jobs): 00:19:31.011 READ: bw=903MiB/s (947MB/s), 26.1MiB/s-341MiB/s (27.3MB/s-358MB/s), io=9172MiB (9618MB), run=10024-10155msec 00:19:31.011 00:19:31.011 Disk stats (read/write): 00:19:31.011 nvme0n1: ios=2045/0, merge=0/0, ticks=1198240/0, in_queue=1198240, util=97.88% 00:19:31.011 nvme10n1: ios=2181/0, merge=0/0, ticks=1210889/0, in_queue=1210889, util=98.05% 00:19:31.011 nvme1n1: ios=10344/0, merge=0/0, ticks=1236861/0, in_queue=1236861, util=98.09% 00:19:31.011 nvme2n1: ios=1995/0, merge=0/0, ticks=1201909/0, in_queue=1201909, util=98.23% 00:19:31.011 nvme3n1: ios=1996/0, merge=0/0, ticks=1208798/0, in_queue=1208798, util=98.31% 00:19:31.011 nvme4n1: ios=4598/0, merge=0/0, ticks=1226915/0, in_queue=1226915, util=98.45% 00:19:31.011 nvme5n1: ios=10427/0, merge=0/0, ticks=1237066/0, in_queue=1237066, util=98.67% 00:19:31.011 nvme6n1: ios=4638/0, merge=0/0, ticks=1229562/0, in_queue=1229562, util=98.75% 00:19:31.011 nvme7n1: ios=4562/0, merge=0/0, ticks=1231537/0, in_queue=1231537, util=99.00% 00:19:31.011 nvme8n1: ios=27282/0, merge=0/0, ticks=1241824/0, in_queue=1241824, util=99.10% 00:19:31.011 nvme9n1: ios=2040/0, merge=0/0, ticks=1206872/0, in_queue=1206872, util=99.07% 00:19:31.011 09:21:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:19:31.011 [global] 00:19:31.011 thread=1 00:19:31.011 invalidate=1 00:19:31.011 rw=randwrite 00:19:31.011 time_based=1 00:19:31.011 runtime=10 00:19:31.011 ioengine=libaio 00:19:31.011 direct=1 00:19:31.011 bs=262144 00:19:31.011 iodepth=64 00:19:31.011 norandommap=1 00:19:31.011 numjobs=1 00:19:31.011 00:19:31.011 [job0] 00:19:31.011 filename=/dev/nvme0n1 00:19:31.011 [job1] 00:19:31.011 filename=/dev/nvme10n1 00:19:31.011 [job2] 00:19:31.011 filename=/dev/nvme1n1 00:19:31.011 [job3] 00:19:31.011 filename=/dev/nvme2n1 00:19:31.011 [job4] 00:19:31.011 filename=/dev/nvme3n1 00:19:31.011 [job5] 00:19:31.011 filename=/dev/nvme4n1 00:19:31.011 [job6] 00:19:31.011 filename=/dev/nvme5n1 00:19:31.011 [job7] 00:19:31.011 filename=/dev/nvme6n1 00:19:31.011 
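[editor annotation] The scripts/fio-wrapper call above (-p nvmf -i 262144 -d 64 -t randwrite -r 10) prints the job file it generated: a shared [global] section built from those arguments plus one [jobN] stanza per connected namespace (the stanzas for job8-job10 follow on the next lines). A standalone sketch of an equivalent invocation, assuming the same device names and that fio is on PATH - the wrapper's internals are not shown in the log, so this is an approximation, not its source:

# Hypothetical standalone equivalent of the fio-wrapper call above.
devices=(/dev/nvme0n1 /dev/nvme10n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
         /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1 /dev/nvme7n1 /dev/nvme8n1 /dev/nvme9n1)

jobfile=$(mktemp)
cat > "$jobfile" << 'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1
EOF

for i in "${!devices[@]}"; do                  # one stanza per namespace, job0..job10 as in the log
    printf '[job%d]\nfilename=%s\n' "$i" "${devices[$i]}" >> "$jobfile"
done

fio "$jobfile"
rm -f "$jobfile"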
[job8] 00:19:31.011 filename=/dev/nvme7n1 00:19:31.011 [job9] 00:19:31.011 filename=/dev/nvme8n1 00:19:31.011 [job10] 00:19:31.011 filename=/dev/nvme9n1 00:19:31.011 Could not set queue depth (nvme0n1) 00:19:31.011 Could not set queue depth (nvme10n1) 00:19:31.011 Could not set queue depth (nvme1n1) 00:19:31.011 Could not set queue depth (nvme2n1) 00:19:31.011 Could not set queue depth (nvme3n1) 00:19:31.011 Could not set queue depth (nvme4n1) 00:19:31.011 Could not set queue depth (nvme5n1) 00:19:31.011 Could not set queue depth (nvme6n1) 00:19:31.011 Could not set queue depth (nvme7n1) 00:19:31.011 Could not set queue depth (nvme8n1) 00:19:31.011 Could not set queue depth (nvme9n1) 00:19:31.011 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:31.011 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:31.011 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:31.011 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:31.011 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:31.011 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:31.011 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:31.011 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:31.011 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:31.011 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:31.011 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:31.011 fio-3.35 00:19:31.011 Starting 11 threads 00:19:40.987 00:19:40.987 job0: (groupid=0, jobs=1): err= 0: pid=79505: Fri Dec 13 09:21:33 2024 00:19:40.987 write: IOPS=189, BW=47.3MiB/s (49.6MB/s)(484MiB/10242msec); 0 zone resets 00:19:40.987 slat (usec): min=16, max=151421, avg=5163.76, stdev=9569.95 00:19:40.987 clat (msec): min=156, max=559, avg=333.21, stdev=31.10 00:19:40.987 lat (msec): min=156, max=559, avg=338.37, stdev=30.19 00:19:40.987 clat percentiles (msec): 00:19:40.987 | 1.00th=[ 226], 5.00th=[ 309], 10.00th=[ 313], 20.00th=[ 317], 00:19:40.987 | 30.00th=[ 330], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 334], 00:19:40.987 | 70.00th=[ 338], 80.00th=[ 342], 90.00th=[ 355], 95.00th=[ 376], 00:19:40.987 | 99.00th=[ 460], 99.50th=[ 523], 99.90th=[ 558], 99.95th=[ 558], 00:19:40.987 | 99.99th=[ 558] 00:19:40.987 bw ( KiB/s): min=36937, max=49664, per=6.86%, avg=47952.45, stdev=3072.89, samples=20 00:19:40.987 iops : min= 144, max= 194, avg=187.30, stdev=12.06, samples=20 00:19:40.987 lat (msec) : 250=1.29%, 500=97.99%, 750=0.72% 00:19:40.987 cpu : usr=0.26%, sys=0.63%, ctx=2032, majf=0, minf=1 00:19:40.987 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:19:40.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.987 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:40.987 issued rwts: total=0,1936,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:19:40.987 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:40.987 job1: (groupid=0, jobs=1): err= 0: pid=79506: Fri Dec 13 09:21:33 2024 00:19:40.987 write: IOPS=207, BW=51.9MiB/s (54.4MB/s)(530MiB/10211msec); 0 zone resets 00:19:40.987 slat (usec): min=15, max=181586, avg=4719.66, stdev=9042.04 00:19:40.987 clat (msec): min=184, max=496, avg=303.41, stdev=23.29 00:19:40.987 lat (msec): min=184, max=496, avg=308.13, stdev=22.01 00:19:40.988 clat percentiles (msec): 00:19:40.988 | 1.00th=[ 230], 5.00th=[ 284], 10.00th=[ 288], 20.00th=[ 292], 00:19:40.988 | 30.00th=[ 300], 40.00th=[ 305], 50.00th=[ 305], 60.00th=[ 309], 00:19:40.988 | 70.00th=[ 309], 80.00th=[ 313], 90.00th=[ 313], 95.00th=[ 321], 00:19:40.988 | 99.00th=[ 405], 99.50th=[ 439], 99.90th=[ 477], 99.95th=[ 498], 00:19:40.988 | 99.99th=[ 498] 00:19:40.988 bw ( KiB/s): min=36790, max=55296, per=7.52%, avg=52629.90, stdev=3895.95, samples=20 00:19:40.988 iops : min= 143, max= 216, avg=205.55, stdev=15.37, samples=20 00:19:40.988 lat (msec) : 250=1.65%, 500=98.35% 00:19:40.988 cpu : usr=0.48%, sys=0.52%, ctx=1806, majf=0, minf=1 00:19:40.988 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:19:40.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:40.988 issued rwts: total=0,2120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:40.988 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:40.988 job2: (groupid=0, jobs=1): err= 0: pid=79518: Fri Dec 13 09:21:33 2024 00:19:40.988 write: IOPS=195, BW=48.8MiB/s (51.2MB/s)(500MiB/10235msec); 0 zone resets 00:19:40.988 slat (usec): min=18, max=63917, avg=4815.75, stdev=8836.54 00:19:40.988 clat (msec): min=65, max=560, avg=322.86, stdev=48.21 00:19:40.988 lat (msec): min=65, max=560, avg=327.68, stdev=48.55 00:19:40.988 clat percentiles (msec): 00:19:40.988 | 1.00th=[ 117], 5.00th=[ 234], 10.00th=[ 309], 20.00th=[ 313], 00:19:40.988 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 330], 60.00th=[ 334], 00:19:40.988 | 70.00th=[ 334], 80.00th=[ 338], 90.00th=[ 347], 95.00th=[ 372], 00:19:40.988 | 99.00th=[ 460], 99.50th=[ 523], 99.90th=[ 558], 99.95th=[ 558], 00:19:40.988 | 99.99th=[ 558] 00:19:40.988 bw ( KiB/s): min=43008, max=65024, per=7.08%, avg=49536.00, stdev=4112.39, samples=20 00:19:40.988 iops : min= 168, max= 254, avg=193.50, stdev=16.06, samples=20 00:19:40.988 lat (msec) : 100=0.60%, 250=4.95%, 500=93.74%, 750=0.70% 00:19:40.988 cpu : usr=0.34%, sys=0.64%, ctx=2181, majf=0, minf=1 00:19:40.988 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:19:40.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.988 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:40.988 issued rwts: total=0,1998,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:40.988 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:40.988 job3: (groupid=0, jobs=1): err= 0: pid=79519: Fri Dec 13 09:21:33 2024 00:19:40.988 write: IOPS=211, BW=52.9MiB/s (55.4MB/s)(540MiB/10213msec); 0 zone resets 00:19:40.988 slat (usec): min=16, max=51432, avg=4627.13, stdev=8184.74 00:19:40.988 clat (msec): min=27, max=507, avg=297.85, stdev=38.01 00:19:40.988 lat (msec): min=27, max=507, avg=302.48, stdev=37.80 00:19:40.988 clat percentiles (msec): 00:19:40.988 | 1.00th=[ 84], 5.00th=[ 275], 10.00th=[ 284], 20.00th=[ 292], 00:19:40.988 | 30.00th=[ 300], 40.00th=[ 305], 
50.00th=[ 305], 60.00th=[ 305], 00:19:40.988 | 70.00th=[ 309], 80.00th=[ 309], 90.00th=[ 313], 95.00th=[ 313], 00:19:40.988 | 99.00th=[ 414], 99.50th=[ 451], 99.90th=[ 489], 99.95th=[ 506], 00:19:40.988 | 99.99th=[ 510] 00:19:40.988 bw ( KiB/s): min=51200, max=57458, per=7.68%, avg=53688.90, stdev=1274.77, samples=20 00:19:40.988 iops : min= 200, max= 224, avg=209.70, stdev= 4.91, samples=20 00:19:40.988 lat (msec) : 50=0.56%, 100=0.74%, 250=2.64%, 500=95.97%, 750=0.09% 00:19:40.988 cpu : usr=0.41%, sys=0.58%, ctx=1899, majf=0, minf=1 00:19:40.988 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:19:40.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:40.988 issued rwts: total=0,2160,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:40.988 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:40.988 job4: (groupid=0, jobs=1): err= 0: pid=79520: Fri Dec 13 09:21:33 2024 00:19:40.988 write: IOPS=209, BW=52.4MiB/s (54.9MB/s)(535MiB/10210msec); 0 zone resets 00:19:40.988 slat (usec): min=17, max=100856, avg=4666.92, stdev=8415.91 00:19:40.988 clat (msec): min=102, max=511, avg=300.53, stdev=27.97 00:19:40.988 lat (msec): min=102, max=511, avg=305.20, stdev=27.22 00:19:40.988 clat percentiles (msec): 00:19:40.988 | 1.00th=[ 182], 5.00th=[ 279], 10.00th=[ 284], 20.00th=[ 288], 00:19:40.988 | 30.00th=[ 296], 40.00th=[ 305], 50.00th=[ 305], 60.00th=[ 305], 00:19:40.988 | 70.00th=[ 309], 80.00th=[ 309], 90.00th=[ 313], 95.00th=[ 313], 00:19:40.988 | 99.00th=[ 418], 99.50th=[ 456], 99.90th=[ 493], 99.95th=[ 510], 00:19:40.988 | 99.99th=[ 510] 00:19:40.988 bw ( KiB/s): min=47104, max=55296, per=7.60%, avg=53171.20, stdev=1791.23, samples=20 00:19:40.988 iops : min= 184, max= 216, avg=207.70, stdev= 7.00, samples=20 00:19:40.988 lat (msec) : 250=2.43%, 500=97.48%, 750=0.09% 00:19:40.988 cpu : usr=0.38%, sys=0.64%, ctx=2165, majf=0, minf=1 00:19:40.988 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:19:40.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:40.988 issued rwts: total=0,2140,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:40.988 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:40.988 job5: (groupid=0, jobs=1): err= 0: pid=79521: Fri Dec 13 09:21:33 2024 00:19:40.988 write: IOPS=211, BW=52.8MiB/s (55.4MB/s)(540MiB/10218msec); 0 zone resets 00:19:40.988 slat (usec): min=16, max=78465, avg=4629.33, stdev=8246.10 00:19:40.988 clat (msec): min=30, max=506, avg=297.95, stdev=37.10 00:19:40.988 lat (msec): min=30, max=506, avg=302.58, stdev=36.83 00:19:40.988 clat percentiles (msec): 00:19:40.988 | 1.00th=[ 87], 5.00th=[ 279], 10.00th=[ 284], 20.00th=[ 288], 00:19:40.988 | 30.00th=[ 296], 40.00th=[ 300], 50.00th=[ 305], 60.00th=[ 305], 00:19:40.988 | 70.00th=[ 309], 80.00th=[ 309], 90.00th=[ 313], 95.00th=[ 317], 00:19:40.988 | 99.00th=[ 414], 99.50th=[ 451], 99.90th=[ 489], 99.95th=[ 506], 00:19:40.988 | 99.99th=[ 506] 00:19:40.988 bw ( KiB/s): min=51200, max=55296, per=7.67%, avg=53683.20, stdev=1421.96, samples=20 00:19:40.988 iops : min= 200, max= 216, avg=209.70, stdev= 5.55, samples=20 00:19:40.988 lat (msec) : 50=0.37%, 100=0.74%, 250=1.85%, 500=96.94%, 750=0.09% 00:19:40.988 cpu : usr=0.44%, sys=0.59%, ctx=2277, majf=0, minf=1 00:19:40.988 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 
32=1.5%, >=64=97.1% 00:19:40.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:40.988 issued rwts: total=0,2160,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:40.988 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:40.988 job6: (groupid=0, jobs=1): err= 0: pid=79522: Fri Dec 13 09:21:33 2024 00:19:40.988 write: IOPS=473, BW=118MiB/s (124MB/s)(1198MiB/10115msec); 0 zone resets 00:19:40.988 slat (usec): min=17, max=12077, avg=2081.38, stdev=3644.86 00:19:40.988 clat (msec): min=10, max=250, avg=132.93, stdev=26.16 00:19:40.988 lat (msec): min=10, max=250, avg=135.01, stdev=26.32 00:19:40.988 clat percentiles (msec): 00:19:40.988 | 1.00th=[ 61], 5.00th=[ 68], 10.00th=[ 70], 20.00th=[ 134], 00:19:40.988 | 30.00th=[ 138], 40.00th=[ 142], 50.00th=[ 142], 60.00th=[ 144], 00:19:40.988 | 70.00th=[ 144], 80.00th=[ 146], 90.00th=[ 146], 95.00th=[ 148], 00:19:40.988 | 99.00th=[ 150], 99.50th=[ 199], 99.90th=[ 241], 99.95th=[ 243], 00:19:40.988 | 99.99th=[ 251] 00:19:40.988 bw ( KiB/s): min=112640, max=233939, per=17.31%, avg=121111.35, stdev=27002.26, samples=20 00:19:40.988 iops : min= 440, max= 913, avg=473.05, stdev=105.30, samples=20 00:19:40.988 lat (msec) : 20=0.17%, 50=0.58%, 100=11.04%, 250=88.17%, 500=0.04% 00:19:40.988 cpu : usr=0.74%, sys=1.42%, ctx=3800, majf=0, minf=1 00:19:40.988 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:40.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:40.988 issued rwts: total=0,4793,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:40.988 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:40.988 job7: (groupid=0, jobs=1): err= 0: pid=79523: Fri Dec 13 09:21:33 2024 00:19:40.988 write: IOPS=474, BW=119MiB/s (124MB/s)(1201MiB/10118msec); 0 zone resets 00:19:40.988 slat (usec): min=14, max=10671, avg=2065.76, stdev=3633.06 00:19:40.988 clat (msec): min=9, max=251, avg=132.72, stdev=26.22 00:19:40.988 lat (msec): min=11, max=251, avg=134.78, stdev=26.40 00:19:40.988 clat percentiles (msec): 00:19:40.988 | 1.00th=[ 39], 5.00th=[ 73], 10.00th=[ 78], 20.00th=[ 134], 00:19:40.988 | 30.00th=[ 138], 40.00th=[ 142], 50.00th=[ 142], 60.00th=[ 144], 00:19:40.988 | 70.00th=[ 144], 80.00th=[ 144], 90.00th=[ 146], 95.00th=[ 148], 00:19:40.988 | 99.00th=[ 150], 99.50th=[ 199], 99.90th=[ 243], 99.95th=[ 243], 00:19:40.988 | 99.99th=[ 251] 00:19:40.988 bw ( KiB/s): min=112640, max=215552, per=17.35%, avg=121344.00, stdev=24350.97, samples=20 00:19:40.988 iops : min= 440, max= 842, avg=474.00, stdev=95.12, samples=20 00:19:40.988 lat (msec) : 10=0.02%, 20=0.29%, 50=1.12%, 100=11.20%, 250=87.32% 00:19:40.988 lat (msec) : 500=0.04% 00:19:40.988 cpu : usr=0.94%, sys=1.35%, ctx=5862, majf=0, minf=2 00:19:40.988 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:40.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:40.988 issued rwts: total=0,4803,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:40.988 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:40.988 job8: (groupid=0, jobs=1): err= 0: pid=79524: Fri Dec 13 09:21:33 2024 00:19:40.988 write: IOPS=192, BW=48.0MiB/s (50.3MB/s)(492MiB/10248msec); 0 zone resets 00:19:40.988 slat (usec): min=18, max=33184, 
avg=5083.95, stdev=8956.24 00:19:40.988 clat (msec): min=36, max=557, avg=327.98, stdev=43.48 00:19:40.988 lat (msec): min=36, max=557, avg=333.06, stdev=43.31 00:19:40.988 clat percentiles (msec): 00:19:40.988 | 1.00th=[ 93], 5.00th=[ 300], 10.00th=[ 309], 20.00th=[ 313], 00:19:40.988 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 334], 00:19:40.988 | 70.00th=[ 334], 80.00th=[ 338], 90.00th=[ 351], 95.00th=[ 376], 00:19:40.988 | 99.00th=[ 456], 99.50th=[ 518], 99.90th=[ 558], 99.95th=[ 558], 00:19:40.988 | 99.99th=[ 558] 00:19:40.988 bw ( KiB/s): min=43008, max=53248, per=6.97%, avg=48742.40, stdev=2410.67, samples=20 00:19:40.988 iops : min= 168, max= 208, avg=190.40, stdev= 9.42, samples=20 00:19:40.988 lat (msec) : 50=0.41%, 100=0.61%, 250=2.08%, 500=96.39%, 750=0.51% 00:19:40.988 cpu : usr=0.36%, sys=0.59%, ctx=2188, majf=0, minf=1 00:19:40.988 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:19:40.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.988 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:40.989 issued rwts: total=0,1968,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:40.989 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:40.989 job9: (groupid=0, jobs=1): err= 0: pid=79525: Fri Dec 13 09:21:33 2024 00:19:40.989 write: IOPS=191, BW=47.9MiB/s (50.2MB/s)(490MiB/10235msec); 0 zone resets 00:19:40.989 slat (usec): min=15, max=211263, avg=4905.17, stdev=9948.76 00:19:40.989 clat (msec): min=97, max=548, avg=328.97, stdev=41.51 00:19:40.989 lat (msec): min=103, max=548, avg=333.88, stdev=41.41 00:19:40.989 clat percentiles (msec): 00:19:40.989 | 1.00th=[ 142], 5.00th=[ 271], 10.00th=[ 309], 20.00th=[ 313], 00:19:40.989 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 334], 00:19:40.989 | 70.00th=[ 334], 80.00th=[ 342], 90.00th=[ 351], 95.00th=[ 380], 00:19:40.989 | 99.00th=[ 489], 99.50th=[ 510], 99.90th=[ 550], 99.95th=[ 550], 00:19:40.989 | 99.99th=[ 550] 00:19:40.989 bw ( KiB/s): min=30720, max=64000, per=6.94%, avg=48563.20, stdev=5828.89, samples=20 00:19:40.989 iops : min= 120, max= 250, avg=189.70, stdev=22.77, samples=20 00:19:40.989 lat (msec) : 100=0.05%, 250=3.98%, 500=95.31%, 750=0.66% 00:19:40.989 cpu : usr=0.32%, sys=0.61%, ctx=2164, majf=0, minf=1 00:19:40.989 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:19:40.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.989 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:40.989 issued rwts: total=0,1961,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:40.989 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:40.989 job10: (groupid=0, jobs=1): err= 0: pid=79526: Fri Dec 13 09:21:33 2024 00:19:40.989 write: IOPS=191, BW=47.9MiB/s (50.2MB/s)(491MiB/10244msec); 0 zone resets 00:19:40.989 slat (usec): min=16, max=55668, avg=5096.34, stdev=9005.70 00:19:40.989 clat (msec): min=30, max=554, avg=328.71, stdev=43.53 00:19:40.989 lat (msec): min=31, max=554, avg=333.81, stdev=43.35 00:19:40.989 clat percentiles (msec): 00:19:40.989 | 1.00th=[ 105], 5.00th=[ 305], 10.00th=[ 309], 20.00th=[ 317], 00:19:40.989 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 334], 00:19:40.989 | 70.00th=[ 334], 80.00th=[ 342], 90.00th=[ 351], 95.00th=[ 376], 00:19:40.989 | 99.00th=[ 451], 99.50th=[ 514], 99.90th=[ 558], 99.95th=[ 558], 00:19:40.989 | 99.99th=[ 558] 00:19:40.989 bw ( KiB/s): min=43008, max=51200, per=6.95%, 
avg=48619.25, stdev=1606.27, samples=20 00:19:40.989 iops : min= 168, max= 200, avg=189.90, stdev= 6.27, samples=20 00:19:40.989 lat (msec) : 50=0.36%, 100=0.61%, 250=2.09%, 500=96.43%, 750=0.51% 00:19:40.989 cpu : usr=0.37%, sys=0.56%, ctx=2010, majf=0, minf=1 00:19:40.989 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:19:40.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.989 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:40.989 issued rwts: total=0,1963,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:40.989 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:40.989 00:19:40.989 Run status group 0 (all jobs): 00:19:40.989 WRITE: bw=683MiB/s (716MB/s), 47.3MiB/s-119MiB/s (49.6MB/s-124MB/s), io=7001MiB (7341MB), run=10115-10248msec 00:19:40.989 00:19:40.989 Disk stats (read/write): 00:19:40.989 nvme0n1: ios=50/3870, merge=0/0, ticks=59/1240695, in_queue=1240754, util=97.96% 00:19:40.989 nvme10n1: ios=49/4104, merge=0/0, ticks=59/1204575, in_queue=1204634, util=97.97% 00:19:40.989 nvme1n1: ios=44/3871, merge=0/0, ticks=52/1205087, in_queue=1205139, util=98.20% 00:19:40.989 nvme2n1: ios=26/4190, merge=0/0, ticks=40/1204767, in_queue=1204807, util=98.08% 00:19:40.989 nvme3n1: ios=29/4153, merge=0/0, ticks=51/1204918, in_queue=1204969, util=98.16% 00:19:40.989 nvme4n1: ios=0/4191, merge=0/0, ticks=0/1205636, in_queue=1205636, util=98.23% 00:19:40.989 nvme5n1: ios=0/9448, merge=0/0, ticks=0/1213214, in_queue=1213214, util=98.33% 00:19:40.989 nvme6n1: ios=0/9469, merge=0/0, ticks=0/1213894, in_queue=1213894, util=98.44% 00:19:40.989 nvme7n1: ios=0/3932, merge=0/0, ticks=0/1240934, in_queue=1240934, util=98.77% 00:19:40.989 nvme8n1: ios=0/3913, merge=0/0, ticks=0/1241405, in_queue=1241405, util=98.74% 00:19:40.989 nvme9n1: ios=0/3922, merge=0/0, ticks=0/1240653, in_queue=1240653, util=98.95% 00:19:40.989 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:19:40.989 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:40.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:40.989 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:40.989 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode3 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:40.989 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:40.989 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:40.989 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode5 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:40.990 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:40.990 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode7 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:40.990 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:40.990 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode9 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.990 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:41.249 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.249 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:41.249 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:41.249 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:41.249 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:41.249 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:41.249 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:41.249 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:19:41.249 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:41.249 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:19:41.249 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:41.249 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:19:41.249 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.249 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:41.249 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.249 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:41.249 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:19:41.249 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:41.249 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:41.249 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:41.249 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:41.249 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:19:41.249 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:41.249 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:19:41.249 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:41.249 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:41.249 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.249 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:41.249 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.249 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:41.249 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:41.249 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:41.249 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:41.249 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:19:41.249 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:41.249 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:19:41.249 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:41.249 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:41.249 rmmod nvme_tcp 00:19:41.249 rmmod nvme_fabrics 00:19:41.249 rmmod nvme_keyring 00:19:41.249 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:41.508 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:19:41.508 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:19:41.508 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 78839 ']' 00:19:41.508 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 78839 00:19:41.508 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 78839 ']' 00:19:41.508 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 78839 00:19:41.508 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:19:41.508 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:41.508 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78839 00:19:41.508 killing process with pid 78839 00:19:41.508 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:41.508 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:41.508 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78839' 00:19:41.508 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 78839 00:19:41.508 09:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 78839 00:19:44.111 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == 
iso ']' 00:19:44.111 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:44.111 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:44.111 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:19:44.111 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:19:44.111 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:19:44.112 00:19:44.112 real 0m52.226s 00:19:44.112 user 2m58.696s 00:19:44.112 sys 0m26.248s 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:44.112 09:21:37 
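The teardown above repeats one pattern for every subsystem in the multiconnection test: disconnect the host-side controller, wait until the namespace with that serial disappears from lsblk (waitforserial_disconnect), then remove the subsystem from the target over RPC. A minimal sketch of that loop, where the SPDK rpc.py helper stands in for rpc_cmd and a simple polling loop stands in for the waitforserial_disconnect helper:

  # Sketch: per-subsystem teardown as exercised by multiconnection.sh
  for i in $(seq 1 "$NVMF_SUBSYS"); do
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"           # drop the initiator-side controller
      while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do    # wait for serial SPDK$i to vanish
          sleep 1
      done
      scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"   # remove it from the target
  done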
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:44.112 ************************************ 00:19:44.112 END TEST nvmf_multiconnection 00:19:44.112 ************************************ 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:44.112 ************************************ 00:19:44.112 START TEST nvmf_initiator_timeout 00:19:44.112 ************************************ 00:19:44.112 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:44.374 * Looking for test storage... 00:19:44.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:44.374 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:44.374 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:19:44.374 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:44.374 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:44.374 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:44.374 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:44.374 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:44.374 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:19:44.374 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:19:44.374 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:19:44.374 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:19:44.374 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:19:44.374 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:19:44.374 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:19:44.374 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:44.374 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:19:44.374 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:19:44.374 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:44.374 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:44.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.375 --rc genhtml_branch_coverage=1 00:19:44.375 --rc genhtml_function_coverage=1 00:19:44.375 --rc genhtml_legend=1 00:19:44.375 --rc geninfo_all_blocks=1 00:19:44.375 --rc geninfo_unexecuted_blocks=1 00:19:44.375 00:19:44.375 ' 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:44.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.375 --rc genhtml_branch_coverage=1 00:19:44.375 --rc genhtml_function_coverage=1 00:19:44.375 --rc genhtml_legend=1 00:19:44.375 --rc geninfo_all_blocks=1 00:19:44.375 --rc geninfo_unexecuted_blocks=1 00:19:44.375 00:19:44.375 ' 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:44.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.375 --rc genhtml_branch_coverage=1 00:19:44.375 --rc genhtml_function_coverage=1 00:19:44.375 --rc genhtml_legend=1 00:19:44.375 --rc geninfo_all_blocks=1 00:19:44.375 --rc geninfo_unexecuted_blocks=1 00:19:44.375 00:19:44.375 ' 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:44.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.375 --rc genhtml_branch_coverage=1 00:19:44.375 --rc genhtml_function_coverage=1 00:19:44.375 --rc genhtml_legend=1 00:19:44.375 --rc geninfo_all_blocks=1 00:19:44.375 --rc geninfo_unexecuted_blocks=1 00:19:44.375 00:19:44.375 ' 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:44.375 09:21:38 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:44.375 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:44.375 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:44.376 Cannot find device "nvmf_init_br" 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:44.376 Cannot find device "nvmf_init_br2" 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:44.376 Cannot find device "nvmf_tgt_br" 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:44.376 Cannot find device "nvmf_tgt_br2" 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:44.376 Cannot find device "nvmf_init_br" 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:44.376 Cannot find device "nvmf_init_br2" 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:44.376 Cannot find device "nvmf_tgt_br" 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:44.376 Cannot find device "nvmf_tgt_br2" 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:19:44.376 09:21:38 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:44.376 Cannot find device "nvmf_br" 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:19:44.376 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:44.635 Cannot find device "nvmf_init_if" 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:44.635 Cannot find device "nvmf_init_if2" 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:44.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:44.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:44.635 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:44.895 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:44.895 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:44.895 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:19:44.895 00:19:44.895 --- 10.0.0.3 ping statistics --- 00:19:44.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.895 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:19:44.895 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:44.895 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:44.895 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:19:44.895 00:19:44.895 --- 10.0.0.4 ping statistics --- 00:19:44.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.895 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:44.895 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:44.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:44.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:19:44.895 00:19:44.895 --- 10.0.0.1 ping statistics --- 00:19:44.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.895 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:19:44.895 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:44.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:44.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:19:44.895 00:19:44.895 --- 10.0.0.2 ping statistics --- 00:19:44.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.895 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:19:44.895 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:44.895 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@461 -- # return 0 00:19:44.895 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:44.895 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:44.895 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:44.895 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:44.895 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:44.895 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:44.895 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:44.895 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:44.895 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:44.895 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:44.895 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:44.895 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:44.895 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=79971 00:19:44.895 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 79971 00:19:44.895 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 79971 ']' 00:19:44.895 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.895 09:21:38 
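nvmftestinit with NET_TYPE=virt builds the whole NVMe/TCP test network in software before anything touches a real NIC: a network namespace for the target, veth pairs for the initiator and target sides, a bridge tying them together, addresses in 10.0.0.0/24, iptables ACCEPT rules for port 4420, and ping checks in every direction. A condensed sketch of the same topology (interface, namespace, and address names copied from the log; this is a simplified illustration of nvmf_veth_init, not the full helper):

  # Sketch: virtual initiator/target topology used by the TCP tests
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                               # initiator -> target reachability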
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.895 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.895 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.895 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:44.895 [2024-12-13 09:21:38.700822] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:19:44.895 [2024-12-13 09:21:38.700992] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.154 [2024-12-13 09:21:38.882942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:45.154 [2024-12-13 09:21:38.980753] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.154 [2024-12-13 09:21:38.980993] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:45.154 [2024-12-13 09:21:38.981226] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:45.154 [2024-12-13 09:21:38.981405] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:45.154 [2024-12-13 09:21:38.981573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
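With the namespace in place, nvmfappstart runs nvmf_tgt inside it with a four-core mask (-m 0xF) and all tracepoint groups enabled (-e 0xFFFF), then waitforlisten blocks until the RPC socket answers. A rough way to reproduce that start-and-wait by hand; the rpc_get_methods polling loop is an illustrative stand-in for the waitforlisten helper, not its exact logic:

  # Sketch: start the target in the test namespace and wait for its RPC socket
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5        # keep polling until the target is listening on the socket
  done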
00:19:45.154 [2024-12-13 09:21:38.983422] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.154 [2024-12-13 09:21:38.983542] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.154 [2024-12-13 09:21:38.983650] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.154 [2024-12-13 09:21:38.983677] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:19:45.412 [2024-12-13 09:21:39.158689] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:45.979 Malloc0 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:45.979 Delay0 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:45.979 [2024-12-13 09:21:39.811197] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:45.979 09:21:39 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:45.979 [2024-12-13 09:21:39.843600] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.979 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid=5267ba90-6d03-4c73-b69a-15b62f92a67a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:19:46.238 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:46.238 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:19:46.238 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:46.238 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:46.238 09:21:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:19:48.138 09:21:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:48.138 09:21:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:48.138 09:21:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:48.138 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:48.138 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:48.138 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:19:48.138 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=80036 00:19:48.138 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
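Provisioning for the initiator-timeout test is a short RPC sequence: a 64 MiB, 512-byte-block malloc bdev, a delay bdev stacked on it with 30 us average and p99 latency for reads and writes, a TCP transport created with the options shown in the log (-o -u 8192), and subsystem cnode1 exposing Delay0 behind a listener on 10.0.0.3:4420, which the initiator then connects to. The same sequence expressed directly with rpc.py and nvme-cli (the host NQN/ID values are the ones generated in this run and will differ on other machines):

  # Sketch: delay-backed subsystem used by initiator_timeout.sh
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a \
      --hostid=5267ba90-6d03-4c73-b69a-15b62f92a67a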
target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:48.138 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:48.138 [global] 00:19:48.138 thread=1 00:19:48.138 invalidate=1 00:19:48.138 rw=write 00:19:48.138 time_based=1 00:19:48.138 runtime=60 00:19:48.138 ioengine=libaio 00:19:48.138 direct=1 00:19:48.138 bs=4096 00:19:48.138 iodepth=1 00:19:48.138 norandommap=0 00:19:48.138 numjobs=1 00:19:48.138 00:19:48.138 verify_dump=1 00:19:48.138 verify_backlog=512 00:19:48.138 verify_state_save=0 00:19:48.138 do_verify=1 00:19:48.138 verify=crc32c-intel 00:19:48.397 [job0] 00:19:48.397 filename=/dev/nvme0n1 00:19:48.397 Could not set queue depth (nvme0n1) 00:19:48.397 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:48.397 fio-3.35 00:19:48.397 Starting 1 thread 00:19:51.679 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:51.679 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.679 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:51.679 true 00:19:51.679 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.679 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:51.679 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.680 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:51.680 true 00:19:51.680 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.680 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:51.680 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.680 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:51.680 true 00:19:51.680 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.680 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:51.680 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.680 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:51.680 true 00:19:51.680 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.680 09:21:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:19:54.210 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:54.210 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.210 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:54.210 true 00:19:54.210 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.210 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:54.210 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.210 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:54.210 true 00:19:54.210 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.211 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:54.211 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.211 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:54.211 true 00:19:54.211 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.211 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:54.211 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.211 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:54.211 true 00:19:54.211 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.211 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:54.211 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 80036 00:20:50.424 00:20:50.424 job0: (groupid=0, jobs=1): err= 0: pid=80062: Fri Dec 13 09:22:42 2024 00:20:50.424 read: IOPS=688, BW=2753KiB/s (2819kB/s)(161MiB/60000msec) 00:20:50.424 slat (usec): min=11, max=13617, avg=15.04, stdev=75.96 00:20:50.424 clat (usec): min=192, max=40565k, avg=1223.50, stdev=199614.44 00:20:50.424 lat (usec): min=205, max=40565k, avg=1238.53, stdev=199614.48 00:20:50.424 clat percentiles (usec): 00:20:50.424 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 221], 00:20:50.424 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 243], 00:20:50.424 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 273], 95.00th=[ 289], 00:20:50.424 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 375], 99.95th=[ 529], 00:20:50.424 | 99.99th=[ 848] 00:20:50.424 write: IOPS=691, BW=2765KiB/s (2831kB/s)(162MiB/60000msec); 0 zone resets 00:20:50.424 slat (usec): min=13, max=743, avg=21.35, stdev= 7.35 00:20:50.424 clat (usec): min=111, max=2110, avg=188.87, stdev=28.47 00:20:50.424 lat (usec): min=159, max=2129, avg=210.22, stdev=30.24 00:20:50.424 clat percentiles (usec): 00:20:50.424 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 167], 00:20:50.424 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 194], 00:20:50.424 | 70.00th=[ 200], 80.00th=[ 210], 90.00th=[ 223], 95.00th=[ 237], 00:20:50.424 | 99.00th=[ 
260], 99.50th=[ 269], 99.90th=[ 310], 99.95th=[ 429], 00:20:50.424 | 99.99th=[ 660] 00:20:50.424 bw ( KiB/s): min= 2368, max= 9896, per=100.00%, avg=8297.03, stdev=1244.36, samples=39 00:20:50.424 iops : min= 592, max= 2474, avg=2074.26, stdev=311.09, samples=39 00:20:50.424 lat (usec) : 250=83.84%, 500=16.12%, 750=0.03%, 1000=0.01% 00:20:50.424 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:20:50.424 cpu : usr=0.49%, sys=2.00%, ctx=82775, majf=0, minf=5 00:20:50.424 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:50.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:50.424 issued rwts: total=41297,41472,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:50.424 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:50.424 00:20:50.424 Run status group 0 (all jobs): 00:20:50.424 READ: bw=2753KiB/s (2819kB/s), 2753KiB/s-2753KiB/s (2819kB/s-2819kB/s), io=161MiB (169MB), run=60000-60000msec 00:20:50.424 WRITE: bw=2765KiB/s (2831kB/s), 2765KiB/s-2765KiB/s (2831kB/s-2831kB/s), io=162MiB (170MB), run=60000-60000msec 00:20:50.424 00:20:50.424 Disk stats (read/write): 00:20:50.424 nvme0n1: ios=41195/41366, merge=0/0, ticks=10480/8422, in_queue=18902, util=99.62% 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:50.424 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:50.424 nvmf hotplug test: fio successful as expected 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:50.424 09:22:42 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:50.424 rmmod nvme_tcp 00:20:50.424 rmmod nvme_fabrics 00:20:50.424 rmmod nvme_keyring 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 79971 ']' 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 79971 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 79971 ']' 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 79971 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79971 00:20:50.424 killing process with pid 79971 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79971' 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 79971 00:20:50.424 09:22:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 79971 00:20:50.424 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:50.424 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:50.424 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:50.424 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:20:50.424 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:50.424 09:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:20:50.424 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:20:50.424 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:50.424 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:50.424 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:50.424 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:50.424 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:50.424 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:50.424 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:50.424 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:50.424 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:50.424 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:50.424 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:50.424 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:50.424 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:50.424 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:50.424 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:50.424 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:50.424 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:20:50.425 00:20:50.425 real 1m5.812s 00:20:50.425 user 3m56.281s 00:20:50.425 sys 0m21.204s 00:20:50.425 ************************************ 00:20:50.425 END TEST nvmf_initiator_timeout 00:20:50.425 ************************************ 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:50.425 ************************************ 00:20:50.425 START TEST nvmf_nsid 00:20:50.425 ************************************ 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:50.425 * Looking for test storage... 00:20:50.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:50.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.425 --rc genhtml_branch_coverage=1 00:20:50.425 --rc genhtml_function_coverage=1 00:20:50.425 --rc genhtml_legend=1 00:20:50.425 --rc geninfo_all_blocks=1 00:20:50.425 --rc geninfo_unexecuted_blocks=1 00:20:50.425 00:20:50.425 ' 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:50.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.425 --rc genhtml_branch_coverage=1 00:20:50.425 --rc genhtml_function_coverage=1 00:20:50.425 --rc genhtml_legend=1 00:20:50.425 --rc geninfo_all_blocks=1 00:20:50.425 --rc geninfo_unexecuted_blocks=1 00:20:50.425 00:20:50.425 ' 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:50.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.425 --rc genhtml_branch_coverage=1 00:20:50.425 --rc genhtml_function_coverage=1 00:20:50.425 --rc genhtml_legend=1 00:20:50.425 --rc geninfo_all_blocks=1 00:20:50.425 --rc geninfo_unexecuted_blocks=1 00:20:50.425 00:20:50.425 ' 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:50.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.425 --rc genhtml_branch_coverage=1 00:20:50.425 --rc genhtml_function_coverage=1 00:20:50.425 --rc genhtml_legend=1 00:20:50.425 --rc geninfo_all_blocks=1 00:20:50.425 --rc geninfo_unexecuted_blocks=1 00:20:50.425 00:20:50.425 ' 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:50.425 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:50.426 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:50.426 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:50.426 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:50.426 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:50.426 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:50.426 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:50.426 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:20:50.426 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:20:50.426 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:20:50.426 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:20:50.426 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:20:50.426 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:20:50.426 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:50.426 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:50.426 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:50.426 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:50.426 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:50.426 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.426 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.426 09:22:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:50.426 Cannot find device "nvmf_init_br" 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:50.426 Cannot find device "nvmf_init_br2" 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:50.426 Cannot find device "nvmf_tgt_br" 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:50.426 Cannot find device "nvmf_tgt_br2" 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:50.426 Cannot find device "nvmf_init_br" 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:50.426 Cannot find device "nvmf_init_br2" 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:50.426 Cannot find device "nvmf_tgt_br" 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:50.426 Cannot find device "nvmf_tgt_br2" 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:50.426 Cannot find device "nvmf_br" 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:50.426 Cannot find device "nvmf_init_if" 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:50.426 Cannot find device "nvmf_init_if2" 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:50.426 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:20:50.426 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:50.426 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:50.685 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:50.685 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:50.685 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:50.685 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:50.685 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:50.685 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:50.685 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:50.685 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:50.685 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:20:50.685 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:50.685 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:50.685 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:50.685 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:50.685 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:50.685 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:50.685 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:50.685 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:50.686 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:20:50.686 00:20:50.686 --- 10.0.0.3 ping statistics --- 00:20:50.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.686 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:20:50.686 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:50.686 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:50.686 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:20:50.686 00:20:50.686 --- 10.0.0.4 ping statistics --- 00:20:50.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.686 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:20:50.686 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:50.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:50.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:20:50.686 00:20:50.686 --- 10.0.0.1 ping statistics --- 00:20:50.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.686 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:20:50.686 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:50.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:50.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:20:50.686 00:20:50.686 --- 10.0.0.2 ping statistics --- 00:20:50.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.686 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:50.686 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:50.686 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:20:50.686 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:50.686 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:50.686 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:50.686 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:50.686 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:50.686 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:50.686 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:50.686 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:20:50.686 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:50.686 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:50.686 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:50.686 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=80931 00:20:50.686 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 80931 00:20:50.686 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:20:50.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.686 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 80931 ']' 00:20:50.686 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.686 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.686 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.686 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.686 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:50.945 [2024-12-13 09:22:44.587002] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:20:50.945 [2024-12-13 09:22:44.587390] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.945 [2024-12-13 09:22:44.774744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.204 [2024-12-13 09:22:44.898721] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.204 [2024-12-13 09:22:44.899049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.204 [2024-12-13 09:22:44.899215] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.204 [2024-12-13 09:22:44.899255] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.204 [2024-12-13 09:22:44.899274] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:51.204 [2024-12-13 09:22:44.900723] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.463 [2024-12-13 09:22:45.108269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:51.723 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.724 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:51.724 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:51.724 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:51.724 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:51.724 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.724 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:51.724 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=80963 00:20:51.724 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:20:51.724 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:20:51.724 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:20:52.011 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:20:52.011 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:52.011 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:52.011 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.011 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.011 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:52.011 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.011 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:52.011 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:20:52.011 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:52.011 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:20:52.011 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:20:52.011 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=643fc624-1a4f-413c-8280-fb11899769f4 00:20:52.011 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:20:52.011 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=3d4012e1-27f7-4b29-9863-b8949d55d861 00:20:52.011 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:20:52.011 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=eb218b1b-93c2-4a78-a7b0-896101153b7f 00:20:52.011 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:20:52.011 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.012 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:52.012 null0 00:20:52.012 null1 00:20:52.012 null2 00:20:52.012 [2024-12-13 09:22:45.662071] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:52.012 [2024-12-13 09:22:45.686324] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:52.012 [2024-12-13 09:22:45.715039] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:20:52.012 [2024-12-13 09:22:45.715426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80963 ] 00:20:52.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:20:52.012 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.012 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 80963 /var/tmp/tgt2.sock 00:20:52.012 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 80963 ']' 00:20:52.012 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:20:52.012 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:52.012 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
00:20:52.012 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:52.012 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:52.289 [2024-12-13 09:22:45.890184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.289 [2024-12-13 09:22:46.016403] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.547 [2024-12-13 09:22:46.253679] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:53.116 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.116 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:53.116 09:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:20:53.375 [2024-12-13 09:22:47.109604] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.375 [2024-12-13 09:22:47.125751] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:20:53.375 nvme0n1 nvme0n2 00:20:53.375 nvme1n1 00:20:53.375 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:20:53.375 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:20:53.375 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:20:53.634 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:20:53.634 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:20:53.634 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:20:53.634 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:20:53.634 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:20:53.634 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:20:53.634 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:20:53.634 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:53.634 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:53.634 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:53.634 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:20:53.634 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:20:53.634 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:20:54.572 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:54.572 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:54.572 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:54.572 09:22:48 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:54.572 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:54.572 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 643fc624-1a4f-413c-8280-fb11899769f4 00:20:54.572 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:54.572 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:20:54.572 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:20:54.572 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:20:54.572 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:54.572 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=643fc6241a4f413c8280fb11899769f4 00:20:54.572 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 643FC6241A4F413C8280FB11899769F4 00:20:54.572 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 643FC6241A4F413C8280FB11899769F4 == \6\4\3\F\C\6\2\4\1\A\4\F\4\1\3\C\8\2\8\0\F\B\1\1\8\9\9\7\6\9\F\4 ]] 00:20:54.572 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:20:54.572 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:54.572 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:54.572 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:20:54.572 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:54.572 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:20:54.572 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:54.572 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 3d4012e1-27f7-4b29-9863-b8949d55d861 00:20:54.572 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:54.572 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:20:54.572 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:20:54.572 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:20:54.572 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:54.831 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3d4012e127f74b299863b8949d55d861 00:20:54.831 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3D4012E127F74B299863B8949D55D861 00:20:54.831 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 3D4012E127F74B299863B8949D55D861 == \3\D\4\0\1\2\E\1\2\7\F\7\4\B\2\9\9\8\6\3\B\8\9\4\9\D\5\5\D\8\6\1 ]] 00:20:54.831 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:20:54.831 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:54.831 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:54.831 09:22:48 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:20:54.831 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:54.831 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:20:54.831 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:54.831 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid eb218b1b-93c2-4a78-a7b0-896101153b7f 00:20:54.831 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:54.831 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:20:54.831 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:20:54.831 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:20:54.831 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:54.831 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=eb218b1b93c24a78a7b0896101153b7f 00:20:54.831 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo EB218B1B93C24A78A7B0896101153B7F 00:20:54.831 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ EB218B1B93C24A78A7B0896101153B7F == \E\B\2\1\8\B\1\B\9\3\C\2\4\A\7\8\A\7\B\0\8\9\6\1\0\1\1\5\3\B\7\F ]] 00:20:54.831 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:20:55.093 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:20:55.093 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:20:55.093 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 80963 00:20:55.093 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 80963 ']' 00:20:55.093 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 80963 00:20:55.093 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:55.093 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:55.093 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80963 00:20:55.093 killing process with pid 80963 00:20:55.093 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:55.093 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:55.093 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80963' 00:20:55.093 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 80963 00:20:55.093 09:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 80963 00:20:56.999 09:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:20:56.999 09:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:56.999 09:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:20:56.999 09:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:20:56.999 09:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:20:56.999 09:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:56.999 09:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:56.999 rmmod nvme_tcp 00:20:56.999 rmmod nvme_fabrics 00:20:56.999 rmmod nvme_keyring 00:20:56.999 09:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:56.999 09:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:20:56.999 09:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:20:56.999 09:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 80931 ']' 00:20:56.999 09:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 80931 00:20:56.999 09:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 80931 ']' 00:20:56.999 09:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 80931 00:20:56.999 09:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:56.999 09:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:56.999 09:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80931 00:20:56.999 killing process with pid 80931 00:20:56.999 09:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:56.999 09:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:56.999 09:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80931' 00:20:56.999 09:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 80931 00:20:56.999 09:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 80931 00:20:57.568 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:57.568 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:57.568 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:57.568 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:20:57.568 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:20:57.568 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:57.568 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:20:57.568 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:57.568 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:57.568 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:57.568 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:57.568 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:57.827 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:20:57.827 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:57.827 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:57.827 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:57.827 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:57.827 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:57.827 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:57.827 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:57.827 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:57.827 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:57.827 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:57.827 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.827 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:57.827 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.827 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:20:57.827 00:20:57.827 real 0m7.873s 00:20:57.827 user 0m12.142s 00:20:57.827 sys 0m1.800s 00:20:57.827 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:57.827 09:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:57.827 ************************************ 00:20:57.827 END TEST nvmf_nsid 00:20:57.827 ************************************ 00:20:57.827 09:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:57.827 00:20:57.827 real 7m43.235s 00:20:57.827 user 18m44.137s 00:20:57.827 sys 1m55.247s 00:20:57.827 09:22:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:57.827 09:22:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:57.827 ************************************ 00:20:57.827 END TEST nvmf_target_extra 00:20:57.827 ************************************ 00:20:58.086 09:22:51 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:58.086 09:22:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:58.086 09:22:51 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:58.086 09:22:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:58.086 ************************************ 00:20:58.086 START TEST nvmf_host 00:20:58.086 ************************************ 00:20:58.086 09:22:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:58.086 * Looking for test storage... 
00:20:58.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:20:58.086 09:22:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:58.086 09:22:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:20:58.086 09:22:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:58.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.346 --rc genhtml_branch_coverage=1 00:20:58.346 --rc genhtml_function_coverage=1 00:20:58.346 --rc genhtml_legend=1 00:20:58.346 --rc geninfo_all_blocks=1 00:20:58.346 --rc geninfo_unexecuted_blocks=1 00:20:58.346 00:20:58.346 ' 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:58.346 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:20:58.346 --rc genhtml_branch_coverage=1 00:20:58.346 --rc genhtml_function_coverage=1 00:20:58.346 --rc genhtml_legend=1 00:20:58.346 --rc geninfo_all_blocks=1 00:20:58.346 --rc geninfo_unexecuted_blocks=1 00:20:58.346 00:20:58.346 ' 00:20:58.346 09:22:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:58.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.346 --rc genhtml_branch_coverage=1 00:20:58.347 --rc genhtml_function_coverage=1 00:20:58.347 --rc genhtml_legend=1 00:20:58.347 --rc geninfo_all_blocks=1 00:20:58.347 --rc geninfo_unexecuted_blocks=1 00:20:58.347 00:20:58.347 ' 00:20:58.347 09:22:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:58.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.347 --rc genhtml_branch_coverage=1 00:20:58.347 --rc genhtml_function_coverage=1 00:20:58.347 --rc genhtml_legend=1 00:20:58.347 --rc geninfo_all_blocks=1 00:20:58.347 --rc geninfo_unexecuted_blocks=1 00:20:58.347 00:20:58.347 ' 00:20:58.347 09:22:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:58.347 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:58.347 
09:22:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.347 ************************************ 00:20:58.347 START TEST nvmf_identify 00:20:58.347 ************************************ 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:58.347 * Looking for test storage... 00:20:58.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:58.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.347 --rc genhtml_branch_coverage=1 00:20:58.347 --rc genhtml_function_coverage=1 00:20:58.347 --rc genhtml_legend=1 00:20:58.347 --rc geninfo_all_blocks=1 00:20:58.347 --rc geninfo_unexecuted_blocks=1 00:20:58.347 00:20:58.347 ' 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:58.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.347 --rc genhtml_branch_coverage=1 00:20:58.347 --rc genhtml_function_coverage=1 00:20:58.347 --rc genhtml_legend=1 00:20:58.347 --rc geninfo_all_blocks=1 00:20:58.347 --rc geninfo_unexecuted_blocks=1 00:20:58.347 00:20:58.347 ' 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:58.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.347 --rc genhtml_branch_coverage=1 00:20:58.347 --rc genhtml_function_coverage=1 00:20:58.347 --rc genhtml_legend=1 00:20:58.347 --rc geninfo_all_blocks=1 00:20:58.347 --rc geninfo_unexecuted_blocks=1 00:20:58.347 00:20:58.347 ' 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:58.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.347 --rc genhtml_branch_coverage=1 00:20:58.347 --rc genhtml_function_coverage=1 00:20:58.347 --rc genhtml_legend=1 00:20:58.347 --rc geninfo_all_blocks=1 00:20:58.347 --rc geninfo_unexecuted_blocks=1 00:20:58.347 00:20:58.347 ' 00:20:58.347 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:58.348 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:58.348 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:58.348 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:20:58.348 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:58.348 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:58.348 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:58.348 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:58.348 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:58.348 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:58.348 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:58.348 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:58.606 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:20:58.606 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:20:58.606 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:58.606 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:58.606 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:58.606 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:58.606 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:58.606 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:20:58.606 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:58.606 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:58.606 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:58.606 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.607 
09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:58.607 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.607 09:22:52 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:58.607 Cannot find device "nvmf_init_br" 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:58.607 Cannot find device "nvmf_init_br2" 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:58.607 Cannot find device "nvmf_tgt_br" 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:20:58.607 Cannot find device "nvmf_tgt_br2" 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:58.607 Cannot find device "nvmf_init_br" 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:58.607 Cannot find device "nvmf_init_br2" 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:58.607 Cannot find device "nvmf_tgt_br" 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:58.607 Cannot find device "nvmf_tgt_br2" 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:58.607 Cannot find device "nvmf_br" 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:58.607 Cannot find device "nvmf_init_if" 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:58.607 Cannot find device "nvmf_init_if2" 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:58.607 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:58.607 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:58.607 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:58.866 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:58.866 
09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:58.866 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:58.866 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:58.866 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:58.866 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:58.866 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:58.866 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:58.866 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:58.866 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:58.866 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:58.866 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:58.866 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:58.866 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:58.866 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:58.866 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:58.866 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:58.866 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:58.866 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:58.866 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:58.867 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:58.867 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:20:58.867 00:20:58.867 --- 10.0.0.3 ping statistics --- 00:20:58.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.867 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:58.867 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:58.867 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:20:58.867 00:20:58.867 --- 10.0.0.4 ping statistics --- 00:20:58.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.867 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:58.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:58.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:20:58.867 00:20:58.867 --- 10.0.0.1 ping statistics --- 00:20:58.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.867 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:58.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:58.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:20:58.867 00:20:58.867 --- 10.0.0.2 ping statistics --- 00:20:58.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.867 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=81344 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 81344 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 81344 ']' 00:20:58.867 
09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:58.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:58.867 09:22:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:59.126 [2024-12-13 09:22:52.841010] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:20:59.126 [2024-12-13 09:22:52.841193] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.385 [2024-12-13 09:22:53.023209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:59.385 [2024-12-13 09:22:53.106897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.385 [2024-12-13 09:22:53.106971] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.385 [2024-12-13 09:22:53.107004] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.385 [2024-12-13 09:22:53.107016] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.385 [2024-12-13 09:22:53.107028] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:59.385 [2024-12-13 09:22:53.108645] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.385 [2024-12-13 09:22:53.108794] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.385 [2024-12-13 09:22:53.108959] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.385 [2024-12-13 09:22:53.109549] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:20:59.385 [2024-12-13 09:22:53.268985] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:59.954 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:59.954 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:20:59.954 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:59.954 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.954 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:59.954 [2024-12-13 09:22:53.739972] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.954 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.954 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:59.954 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:59.954 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:59.954 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:59.954 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.954 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:00.213 Malloc0 00:21:00.213 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.213 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:00.213 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.213 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:00.213 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.213 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:00.213 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.213 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:00.213 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.213 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:00.213 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.213 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:00.213 [2024-12-13 09:22:53.890176] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:00.213 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.213 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:21:00.213 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.213 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:00.213 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.213 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:00.213 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.213 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:00.213 [ 00:21:00.213 { 00:21:00.213 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:00.213 "subtype": "Discovery", 00:21:00.213 "listen_addresses": [ 00:21:00.213 { 00:21:00.213 "trtype": "TCP", 00:21:00.213 "adrfam": "IPv4", 00:21:00.213 "traddr": "10.0.0.3", 00:21:00.213 "trsvcid": "4420" 00:21:00.213 } 00:21:00.213 ], 00:21:00.213 "allow_any_host": true, 00:21:00.213 "hosts": [] 00:21:00.213 }, 00:21:00.213 { 00:21:00.213 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.213 "subtype": "NVMe", 00:21:00.213 "listen_addresses": [ 00:21:00.213 { 00:21:00.213 "trtype": "TCP", 00:21:00.213 "adrfam": "IPv4", 00:21:00.213 "traddr": "10.0.0.3", 00:21:00.213 "trsvcid": "4420" 00:21:00.213 } 00:21:00.213 ], 00:21:00.213 "allow_any_host": true, 00:21:00.213 "hosts": [], 00:21:00.213 "serial_number": "SPDK00000000000001", 00:21:00.213 "model_number": "SPDK bdev Controller", 00:21:00.213 "max_namespaces": 32, 00:21:00.213 "min_cntlid": 1, 00:21:00.213 "max_cntlid": 65519, 00:21:00.213 "namespaces": [ 00:21:00.213 { 00:21:00.213 "nsid": 1, 00:21:00.213 "bdev_name": "Malloc0", 00:21:00.213 "name": "Malloc0", 00:21:00.213 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:00.213 "eui64": "ABCDEF0123456789", 00:21:00.213 "uuid": "1342e938-15fa-4bcf-b1d5-b132ade61504" 00:21:00.213 } 00:21:00.213 ] 00:21:00.213 } 00:21:00.213 ] 00:21:00.213 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.213 09:22:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:00.213 [2024-12-13 09:22:53.968849] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:21:00.213 [2024-12-13 09:22:53.968957] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81379 ] 00:21:00.475 [2024-12-13 09:22:54.143887] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:21:00.475 [2024-12-13 09:22:54.144026] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:00.475 [2024-12-13 09:22:54.144044] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:00.475 [2024-12-13 09:22:54.144068] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:00.475 [2024-12-13 09:22:54.144083] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:00.475 [2024-12-13 09:22:54.144564] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:21:00.475 [2024-12-13 09:22:54.144656] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:21:00.475 [2024-12-13 09:22:54.161380] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:00.475 [2024-12-13 09:22:54.161429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:00.475 [2024-12-13 09:22:54.161440] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:00.475 [2024-12-13 09:22:54.161447] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:00.475 [2024-12-13 09:22:54.161534] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.475 [2024-12-13 09:22:54.161550] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.475 [2024-12-13 09:22:54.161558] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:00.475 [2024-12-13 09:22:54.161583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:00.475 [2024-12-13 09:22:54.161624] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:00.475 [2024-12-13 09:22:54.169352] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.475 [2024-12-13 09:22:54.169400] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.475 [2024-12-13 09:22:54.169410] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.475 [2024-12-13 09:22:54.169419] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:00.475 [2024-12-13 09:22:54.169448] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:00.475 [2024-12-13 09:22:54.169465] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:21:00.475 [2024-12-13 09:22:54.169476] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:21:00.475 [2024-12-13 09:22:54.169499] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.475 [2024-12-13 09:22:54.169509] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.475 [2024-12-13 09:22:54.169516] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:00.475 [2024-12-13 09:22:54.169532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.475 [2024-12-13 09:22:54.169569] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:00.475 [2024-12-13 09:22:54.169693] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.475 [2024-12-13 09:22:54.169724] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.475 [2024-12-13 09:22:54.169731] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.475 [2024-12-13 09:22:54.169739] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:00.475 [2024-12-13 09:22:54.169750] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:21:00.475 [2024-12-13 09:22:54.169763] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:21:00.475 [2024-12-13 09:22:54.169776] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.475 [2024-12-13 09:22:54.169783] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.475 [2024-12-13 09:22:54.169794] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:00.475 [2024-12-13 09:22:54.169811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.475 [2024-12-13 09:22:54.169844] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:00.475 [2024-12-13 09:22:54.169937] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.475 [2024-12-13 09:22:54.169952] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.475 [2024-12-13 09:22:54.169959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.475 [2024-12-13 09:22:54.169966] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:00.475 [2024-12-13 09:22:54.169976] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:21:00.475 [2024-12-13 09:22:54.169990] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:00.475 [2024-12-13 09:22:54.170002] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.475 [2024-12-13 09:22:54.170010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.475 [2024-12-13 09:22:54.170017] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:00.475 [2024-12-13 09:22:54.170030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.475 [2024-12-13 09:22:54.170058] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:00.475 [2024-12-13 09:22:54.170143] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.475 [2024-12-13 09:22:54.170155] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.475 [2024-12-13 09:22:54.170161] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.475 [2024-12-13 09:22:54.170168] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:00.475 [2024-12-13 09:22:54.170178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:00.475 [2024-12-13 09:22:54.170204] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.475 [2024-12-13 09:22:54.170215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.475 [2024-12-13 09:22:54.170222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:00.475 [2024-12-13 09:22:54.170236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.475 [2024-12-13 09:22:54.170263] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:00.475 [2024-12-13 09:22:54.170389] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.475 [2024-12-13 09:22:54.170405] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.476 [2024-12-13 09:22:54.170412] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.170419] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:00.476 [2024-12-13 09:22:54.170429] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:00.476 [2024-12-13 09:22:54.170439] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:00.476 [2024-12-13 09:22:54.170452] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:00.476 [2024-12-13 09:22:54.170561] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:21:00.476 [2024-12-13 09:22:54.170579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:00.476 [2024-12-13 09:22:54.170595] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.170603] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.170619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:00.476 [2024-12-13 09:22:54.170634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.476 [2024-12-13 09:22:54.170679] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:00.476 [2024-12-13 09:22:54.170767] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.476 [2024-12-13 09:22:54.170783] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.476 [2024-12-13 09:22:54.170789] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.170796] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:00.476 [2024-12-13 09:22:54.170806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:00.476 [2024-12-13 09:22:54.170865] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.170875] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.170882] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:00.476 [2024-12-13 09:22:54.170896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.476 [2024-12-13 09:22:54.170926] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:00.476 [2024-12-13 09:22:54.171003] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.476 [2024-12-13 09:22:54.171016] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.476 [2024-12-13 09:22:54.171022] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.171030] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:00.476 [2024-12-13 09:22:54.171039] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:00.476 [2024-12-13 09:22:54.171053] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:00.476 [2024-12-13 09:22:54.171079] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:21:00.476 [2024-12-13 09:22:54.171098] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:00.476 [2024-12-13 09:22:54.171125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.171135] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:00.476 [2024-12-13 09:22:54.171176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.476 [2024-12-13 09:22:54.171228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:00.476 [2024-12-13 09:22:54.171390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:00.476 [2024-12-13 09:22:54.171405] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:00.476 [2024-12-13 09:22:54.171411] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.171419] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:21:00.476 [2024-12-13 09:22:54.171427] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:21:00.476 [2024-12-13 09:22:54.171435] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.171453] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.171461] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.171478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.476 [2024-12-13 09:22:54.171489] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.476 [2024-12-13 09:22:54.171495] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.171502] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:00.476 [2024-12-13 09:22:54.171519] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:21:00.476 [2024-12-13 09:22:54.171529] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:21:00.476 [2024-12-13 09:22:54.171537] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:21:00.476 [2024-12-13 09:22:54.171545] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:21:00.476 [2024-12-13 09:22:54.171553] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:21:00.476 [2024-12-13 09:22:54.171563] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:21:00.476 [2024-12-13 09:22:54.171577] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:00.476 [2024-12-13 09:22:54.171594] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.171602] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.171630] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:00.476 [2024-12-13 09:22:54.171646] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:00.476 [2024-12-13 09:22:54.171691] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:00.476 [2024-12-13 09:22:54.171780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.476 [2024-12-13 09:22:54.171795] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.476 [2024-12-13 09:22:54.171803] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.171809] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:00.476 [2024-12-13 09:22:54.171828] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.171837] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.171844] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:00.476 [2024-12-13 09:22:54.171864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:00.476 [2024-12-13 09:22:54.171875] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.171882] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.171888] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:21:00.476 [2024-12-13 09:22:54.171897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:00.476 [2024-12-13 09:22:54.171906] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.171915] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.171921] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:21:00.476 [2024-12-13 09:22:54.171931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:00.476 [2024-12-13 09:22:54.171939] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.171946] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.171951] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.476 [2024-12-13 09:22:54.171961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:00.476 [2024-12-13 09:22:54.171969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:00.476 [2024-12-13 09:22:54.171987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:00.476 [2024-12-13 09:22:54.171998] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.172004] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:21:00.476 [2024-12-13 09:22:54.172016] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.476 [2024-12-13 09:22:54.172050] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:00.476 [2024-12-13 09:22:54.172063] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:21:00.476 [2024-12-13 09:22:54.172070] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:21:00.476 [2024-12-13 09:22:54.172077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.476 [2024-12-13 09:22:54.172083] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:21:00.476 [2024-12-13 09:22:54.172223] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.476 [2024-12-13 09:22:54.172235] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.476 [2024-12-13 09:22:54.172241] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.172248] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:21:00.476 [2024-12-13 09:22:54.172261] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:21:00.476 [2024-12-13 09:22:54.172272] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:21:00.476 [2024-12-13 09:22:54.172296] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.476 [2024-12-13 09:22:54.172318] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:21:00.477 [2024-12-13 09:22:54.172335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.477 [2024-12-13 09:22:54.172369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:21:00.477 [2024-12-13 09:22:54.172481] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:00.477 [2024-12-13 09:22:54.172493] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:00.477 [2024-12-13 09:22:54.172500] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:00.477 [2024-12-13 09:22:54.172507] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:21:00.477 [2024-12-13 09:22:54.172514] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:21:00.477 [2024-12-13 09:22:54.172522] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.477 [2024-12-13 09:22:54.172534] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:00.477 [2024-12-13 09:22:54.172545] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:00.477 [2024-12-13 09:22:54.172558] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.477 [2024-12-13 09:22:54.172571] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.477 [2024-12-13 09:22:54.172578] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.477 [2024-12-13 09:22:54.172589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:21:00.477 [2024-12-13 09:22:54.172614] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:21:00.477 [2024-12-13 09:22:54.172667] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.477 [2024-12-13 09:22:54.172679] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:21:00.477 [2024-12-13 09:22:54.172693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.477 [2024-12-13 09:22:54.172704] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.477 [2024-12-13 09:22:54.172718] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:21:00.477 [2024-12-13 09:22:54.172726] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:21:00.477 [2024-12-13 09:22:54.172740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:00.477 [2024-12-13 09:22:54.172774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:21:00.477 [2024-12-13 09:22:54.172791] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:21:00.477 [2024-12-13 09:22:54.173081] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:00.477 [2024-12-13 09:22:54.173104] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:00.477 [2024-12-13 09:22:54.173112] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:00.477 [2024-12-13 09:22:54.173119] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=1024, cccid=4 00:21:00.477 [2024-12-13 09:22:54.173127] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=1024 00:21:00.477 [2024-12-13 09:22:54.173135] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.477 [2024-12-13 09:22:54.173146] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:00.477 [2024-12-13 09:22:54.173153] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:00.477 [2024-12-13 09:22:54.173162] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.477 [2024-12-13 09:22:54.173175] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.477 [2024-12-13 09:22:54.173182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.477 [2024-12-13 09:22:54.173189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:21:00.477 [2024-12-13 09:22:54.173216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.477 [2024-12-13 09:22:54.173228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.477 [2024-12-13 09:22:54.173234] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.477 [2024-12-13 09:22:54.173240] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:21:00.477 [2024-12-13 09:22:54.173272] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.477 [2024-12-13 09:22:54.177316] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:21:00.477 [2024-12-13 09:22:54.177350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.477 [2024-12-13 09:22:54.177402] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:21:00.477 [2024-12-13 09:22:54.177511] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:00.477 [2024-12-13 09:22:54.177524] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:00.477 [2024-12-13 09:22:54.177530] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:00.477 [2024-12-13 09:22:54.177536] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x61500000f080): datao=0, datal=3072, cccid=4 00:21:00.477 [2024-12-13 09:22:54.177543] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=3072 00:21:00.477 [2024-12-13 09:22:54.177566] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.477 [2024-12-13 09:22:54.177577] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:00.477 [2024-12-13 09:22:54.177590] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:00.477 [2024-12-13 09:22:54.177603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.477 [2024-12-13 09:22:54.177616] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.477 [2024-12-13 09:22:54.177623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.477 [2024-12-13 09:22:54.177630] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:21:00.477 [2024-12-13 09:22:54.177652] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.477 [2024-12-13 09:22:54.177662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:21:00.477 [2024-12-13 09:22:54.177675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.477 [2024-12-13 09:22:54.177712] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:21:00.477 [2024-12-13 09:22:54.177848] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:00.477 [2024-12-13 09:22:54.177861] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:00.477 [2024-12-13 09:22:54.177867] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:00.477 [2024-12-13 09:22:54.177874] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8, cccid=4 00:21:00.477 [2024-12-13 09:22:54.177881] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=8 00:21:00.477 [2024-12-13 09:22:54.177887] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.477 [2024-12-13 09:22:54.177898] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:00.477 [2024-12-13 09:22:54.177904] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:00.477 [2024-12-13 09:22:54.177932] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.477 [2024-12-13 09:22:54.177944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.477 [2024-12-13 09:22:54.177950] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.477 [2024-12-13 09:22:54.177956] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:21:00.477 ===================================================== 00:21:00.477 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:00.477 ===================================================== 00:21:00.477 Controller Capabilities/Features 00:21:00.477 ================================ 00:21:00.477 Vendor ID: 0000 00:21:00.477 Subsystem Vendor ID: 0000 00:21:00.477 Serial Number: .................... 
00:21:00.477 Model Number: ........................................ 00:21:00.477 Firmware Version: 25.01 00:21:00.477 Recommended Arb Burst: 0 00:21:00.477 IEEE OUI Identifier: 00 00 00 00:21:00.477 Multi-path I/O 00:21:00.477 May have multiple subsystem ports: No 00:21:00.477 May have multiple controllers: No 00:21:00.477 Associated with SR-IOV VF: No 00:21:00.477 Max Data Transfer Size: 131072 00:21:00.477 Max Number of Namespaces: 0 00:21:00.477 Max Number of I/O Queues: 1024 00:21:00.477 NVMe Specification Version (VS): 1.3 00:21:00.477 NVMe Specification Version (Identify): 1.3 00:21:00.477 Maximum Queue Entries: 128 00:21:00.477 Contiguous Queues Required: Yes 00:21:00.477 Arbitration Mechanisms Supported 00:21:00.477 Weighted Round Robin: Not Supported 00:21:00.477 Vendor Specific: Not Supported 00:21:00.477 Reset Timeout: 15000 ms 00:21:00.477 Doorbell Stride: 4 bytes 00:21:00.477 NVM Subsystem Reset: Not Supported 00:21:00.477 Command Sets Supported 00:21:00.477 NVM Command Set: Supported 00:21:00.477 Boot Partition: Not Supported 00:21:00.477 Memory Page Size Minimum: 4096 bytes 00:21:00.477 Memory Page Size Maximum: 4096 bytes 00:21:00.477 Persistent Memory Region: Not Supported 00:21:00.477 Optional Asynchronous Events Supported 00:21:00.477 Namespace Attribute Notices: Not Supported 00:21:00.477 Firmware Activation Notices: Not Supported 00:21:00.477 ANA Change Notices: Not Supported 00:21:00.477 PLE Aggregate Log Change Notices: Not Supported 00:21:00.477 LBA Status Info Alert Notices: Not Supported 00:21:00.477 EGE Aggregate Log Change Notices: Not Supported 00:21:00.477 Normal NVM Subsystem Shutdown event: Not Supported 00:21:00.477 Zone Descriptor Change Notices: Not Supported 00:21:00.477 Discovery Log Change Notices: Supported 00:21:00.477 Controller Attributes 00:21:00.477 128-bit Host Identifier: Not Supported 00:21:00.477 Non-Operational Permissive Mode: Not Supported 00:21:00.477 NVM Sets: Not Supported 00:21:00.477 Read Recovery Levels: Not Supported 00:21:00.477 Endurance Groups: Not Supported 00:21:00.477 Predictable Latency Mode: Not Supported 00:21:00.477 Traffic Based Keep ALive: Not Supported 00:21:00.477 Namespace Granularity: Not Supported 00:21:00.477 SQ Associations: Not Supported 00:21:00.477 UUID List: Not Supported 00:21:00.477 Multi-Domain Subsystem: Not Supported 00:21:00.477 Fixed Capacity Management: Not Supported 00:21:00.477 Variable Capacity Management: Not Supported 00:21:00.477 Delete Endurance Group: Not Supported 00:21:00.478 Delete NVM Set: Not Supported 00:21:00.478 Extended LBA Formats Supported: Not Supported 00:21:00.478 Flexible Data Placement Supported: Not Supported 00:21:00.478 00:21:00.478 Controller Memory Buffer Support 00:21:00.478 ================================ 00:21:00.478 Supported: No 00:21:00.478 00:21:00.478 Persistent Memory Region Support 00:21:00.478 ================================ 00:21:00.478 Supported: No 00:21:00.478 00:21:00.478 Admin Command Set Attributes 00:21:00.478 ============================ 00:21:00.478 Security Send/Receive: Not Supported 00:21:00.478 Format NVM: Not Supported 00:21:00.478 Firmware Activate/Download: Not Supported 00:21:00.478 Namespace Management: Not Supported 00:21:00.478 Device Self-Test: Not Supported 00:21:00.478 Directives: Not Supported 00:21:00.478 NVMe-MI: Not Supported 00:21:00.478 Virtualization Management: Not Supported 00:21:00.478 Doorbell Buffer Config: Not Supported 00:21:00.478 Get LBA Status Capability: Not Supported 00:21:00.478 Command & Feature Lockdown Capability: 
Not Supported 00:21:00.478 Abort Command Limit: 1 00:21:00.478 Async Event Request Limit: 4 00:21:00.478 Number of Firmware Slots: N/A 00:21:00.478 Firmware Slot 1 Read-Only: N/A 00:21:00.478 Firmware Activation Without Reset: N/A 00:21:00.478 Multiple Update Detection Support: N/A 00:21:00.478 Firmware Update Granularity: No Information Provided 00:21:00.478 Per-Namespace SMART Log: No 00:21:00.478 Asymmetric Namespace Access Log Page: Not Supported 00:21:00.478 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:00.478 Command Effects Log Page: Not Supported 00:21:00.478 Get Log Page Extended Data: Supported 00:21:00.478 Telemetry Log Pages: Not Supported 00:21:00.478 Persistent Event Log Pages: Not Supported 00:21:00.478 Supported Log Pages Log Page: May Support 00:21:00.478 Commands Supported & Effects Log Page: Not Supported 00:21:00.478 Feature Identifiers & Effects Log Page:May Support 00:21:00.478 NVMe-MI Commands & Effects Log Page: May Support 00:21:00.478 Data Area 4 for Telemetry Log: Not Supported 00:21:00.478 Error Log Page Entries Supported: 128 00:21:00.478 Keep Alive: Not Supported 00:21:00.478 00:21:00.478 NVM Command Set Attributes 00:21:00.478 ========================== 00:21:00.478 Submission Queue Entry Size 00:21:00.478 Max: 1 00:21:00.478 Min: 1 00:21:00.478 Completion Queue Entry Size 00:21:00.478 Max: 1 00:21:00.478 Min: 1 00:21:00.478 Number of Namespaces: 0 00:21:00.478 Compare Command: Not Supported 00:21:00.478 Write Uncorrectable Command: Not Supported 00:21:00.478 Dataset Management Command: Not Supported 00:21:00.478 Write Zeroes Command: Not Supported 00:21:00.478 Set Features Save Field: Not Supported 00:21:00.478 Reservations: Not Supported 00:21:00.478 Timestamp: Not Supported 00:21:00.478 Copy: Not Supported 00:21:00.478 Volatile Write Cache: Not Present 00:21:00.478 Atomic Write Unit (Normal): 1 00:21:00.478 Atomic Write Unit (PFail): 1 00:21:00.478 Atomic Compare & Write Unit: 1 00:21:00.478 Fused Compare & Write: Supported 00:21:00.478 Scatter-Gather List 00:21:00.478 SGL Command Set: Supported 00:21:00.478 SGL Keyed: Supported 00:21:00.478 SGL Bit Bucket Descriptor: Not Supported 00:21:00.478 SGL Metadata Pointer: Not Supported 00:21:00.478 Oversized SGL: Not Supported 00:21:00.478 SGL Metadata Address: Not Supported 00:21:00.478 SGL Offset: Supported 00:21:00.478 Transport SGL Data Block: Not Supported 00:21:00.478 Replay Protected Memory Block: Not Supported 00:21:00.478 00:21:00.478 Firmware Slot Information 00:21:00.478 ========================= 00:21:00.478 Active slot: 0 00:21:00.478 00:21:00.478 00:21:00.478 Error Log 00:21:00.478 ========= 00:21:00.478 00:21:00.478 Active Namespaces 00:21:00.478 ================= 00:21:00.478 Discovery Log Page 00:21:00.478 ================== 00:21:00.478 Generation Counter: 2 00:21:00.478 Number of Records: 2 00:21:00.478 Record Format: 0 00:21:00.478 00:21:00.478 Discovery Log Entry 0 00:21:00.478 ---------------------- 00:21:00.478 Transport Type: 3 (TCP) 00:21:00.478 Address Family: 1 (IPv4) 00:21:00.478 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:00.478 Entry Flags: 00:21:00.478 Duplicate Returned Information: 1 00:21:00.478 Explicit Persistent Connection Support for Discovery: 1 00:21:00.478 Transport Requirements: 00:21:00.478 Secure Channel: Not Required 00:21:00.478 Port ID: 0 (0x0000) 00:21:00.478 Controller ID: 65535 (0xffff) 00:21:00.478 Admin Max SQ Size: 128 00:21:00.478 Transport Service Identifier: 4420 00:21:00.478 NVM Subsystem Qualified Name: 
nqn.2014-08.org.nvmexpress.discovery 00:21:00.478 Transport Address: 10.0.0.3 00:21:00.478 Discovery Log Entry 1 00:21:00.478 ---------------------- 00:21:00.478 Transport Type: 3 (TCP) 00:21:00.478 Address Family: 1 (IPv4) 00:21:00.478 Subsystem Type: 2 (NVM Subsystem) 00:21:00.478 Entry Flags: 00:21:00.478 Duplicate Returned Information: 0 00:21:00.478 Explicit Persistent Connection Support for Discovery: 0 00:21:00.478 Transport Requirements: 00:21:00.478 Secure Channel: Not Required 00:21:00.478 Port ID: 0 (0x0000) 00:21:00.478 Controller ID: 65535 (0xffff) 00:21:00.478 Admin Max SQ Size: 128 00:21:00.478 Transport Service Identifier: 4420 00:21:00.478 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:00.478 Transport Address: 10.0.0.3 [2024-12-13 09:22:54.178121] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:21:00.478 [2024-12-13 09:22:54.178146] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:00.478 [2024-12-13 09:22:54.178159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.478 [2024-12-13 09:22:54.178169] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:21:00.478 [2024-12-13 09:22:54.178178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.478 [2024-12-13 09:22:54.178185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:21:00.478 [2024-12-13 09:22:54.178193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.478 [2024-12-13 09:22:54.178200] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.478 [2024-12-13 09:22:54.178208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.478 [2024-12-13 09:22:54.178229] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.478 [2024-12-13 09:22:54.178239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.478 [2024-12-13 09:22:54.178246] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.478 [2024-12-13 09:22:54.178259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.478 [2024-12-13 09:22:54.178306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.478 [2024-12-13 09:22:54.178399] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.478 [2024-12-13 09:22:54.178413] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.478 [2024-12-13 09:22:54.178420] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.478 [2024-12-13 09:22:54.178427] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.478 [2024-12-13 09:22:54.178441] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.478 [2024-12-13 09:22:54.178453] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.478 
[2024-12-13 09:22:54.178460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.478 [2024-12-13 09:22:54.178474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.478 [2024-12-13 09:22:54.178509] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.478 [2024-12-13 09:22:54.178660] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.478 [2024-12-13 09:22:54.178672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.478 [2024-12-13 09:22:54.178678] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.478 [2024-12-13 09:22:54.178684] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.478 [2024-12-13 09:22:54.178693] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:21:00.478 [2024-12-13 09:22:54.178702] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:21:00.478 [2024-12-13 09:22:54.178719] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.478 [2024-12-13 09:22:54.178728] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.478 [2024-12-13 09:22:54.178739] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.478 [2024-12-13 09:22:54.178755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.478 [2024-12-13 09:22:54.178784] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.478 [2024-12-13 09:22:54.178916] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.478 [2024-12-13 09:22:54.178930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.478 [2024-12-13 09:22:54.178936] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.178943] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.479 [2024-12-13 09:22:54.178966] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.178976] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.178982] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.479 [2024-12-13 09:22:54.178995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.479 [2024-12-13 09:22:54.179023] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.479 [2024-12-13 09:22:54.179115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.479 [2024-12-13 09:22:54.179132] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.479 [2024-12-13 09:22:54.179140] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.179157] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.479 [2024-12-13 09:22:54.179175] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.179184] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.179190] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.479 [2024-12-13 09:22:54.179217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.479 [2024-12-13 09:22:54.179243] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.479 [2024-12-13 09:22:54.179336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.479 [2024-12-13 09:22:54.179349] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.479 [2024-12-13 09:22:54.179355] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.179376] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.479 [2024-12-13 09:22:54.179395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.179403] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.179410] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.479 [2024-12-13 09:22:54.179426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.479 [2024-12-13 09:22:54.179457] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.479 [2024-12-13 09:22:54.179554] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.479 [2024-12-13 09:22:54.179570] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.479 [2024-12-13 09:22:54.179577] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.179584] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.479 [2024-12-13 09:22:54.179602] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.179610] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.179617] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.479 [2024-12-13 09:22:54.179629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.479 [2024-12-13 09:22:54.179656] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.479 [2024-12-13 09:22:54.179764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.479 [2024-12-13 09:22:54.179775] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.479 [2024-12-13 09:22:54.179781] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.179788] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.479 [2024-12-13 09:22:54.179804] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.179812] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.179818] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.479 [2024-12-13 09:22:54.179830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.479 [2024-12-13 09:22:54.179860] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.479 [2024-12-13 09:22:54.179947] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.479 [2024-12-13 09:22:54.179957] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.479 [2024-12-13 09:22:54.179963] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.179975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.479 [2024-12-13 09:22:54.179993] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.180001] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.180007] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.479 [2024-12-13 09:22:54.180018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.479 [2024-12-13 09:22:54.180044] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.479 [2024-12-13 09:22:54.180129] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.479 [2024-12-13 09:22:54.180141] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.479 [2024-12-13 09:22:54.180147] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.180154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.479 [2024-12-13 09:22:54.180170] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.180178] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.180184] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.479 [2024-12-13 09:22:54.180195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.479 [2024-12-13 09:22:54.180221] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.479 [2024-12-13 09:22:54.180319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.479 [2024-12-13 09:22:54.180333] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.479 [2024-12-13 09:22:54.180339] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.180345] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.479 [2024-12-13 09:22:54.180367] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.180376] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.180397] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.479 [2024-12-13 09:22:54.180413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.479 [2024-12-13 09:22:54.180441] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.479 [2024-12-13 09:22:54.180521] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.479 [2024-12-13 09:22:54.180533] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.479 [2024-12-13 09:22:54.180539] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.180546] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.479 [2024-12-13 09:22:54.180563] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.180572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.180577] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.479 [2024-12-13 09:22:54.180589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.479 [2024-12-13 09:22:54.180616] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.479 [2024-12-13 09:22:54.180691] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.479 [2024-12-13 09:22:54.180732] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.479 [2024-12-13 09:22:54.180741] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.180748] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.479 [2024-12-13 09:22:54.180765] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.180773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.180779] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.479 [2024-12-13 09:22:54.180791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.479 [2024-12-13 09:22:54.180827] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.479 [2024-12-13 09:22:54.180905] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.479 [2024-12-13 09:22:54.180915] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.479 [2024-12-13 09:22:54.180921] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.479 [2024-12-13 09:22:54.180928] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.480 [2024-12-13 09:22:54.180948] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.480 [2024-12-13 09:22:54.180957] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.480 [2024-12-13 09:22:54.180963] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.480 [2024-12-13 09:22:54.180975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.480 [2024-12-13 09:22:54.181001] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.480 [2024-12-13 09:22:54.181085] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.480 [2024-12-13 09:22:54.181106] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.480 [2024-12-13 09:22:54.181113] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.480 [2024-12-13 09:22:54.181120] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.480 [2024-12-13 09:22:54.181137] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.480 [2024-12-13 09:22:54.181146] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.480 [2024-12-13 09:22:54.181152] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.480 [2024-12-13 09:22:54.181164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.480 [2024-12-13 09:22:54.181190] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.480 [2024-12-13 09:22:54.181269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.480 [2024-12-13 09:22:54.185369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.480 [2024-12-13 09:22:54.185382] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.480 [2024-12-13 09:22:54.185389] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.480 [2024-12-13 09:22:54.185411] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.480 [2024-12-13 09:22:54.185425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.480 [2024-12-13 09:22:54.185433] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.480 [2024-12-13 09:22:54.185446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.480 [2024-12-13 09:22:54.185494] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.480 [2024-12-13 09:22:54.185596] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.480 [2024-12-13 09:22:54.185607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.480 [2024-12-13 09:22:54.185613] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.480 [2024-12-13 09:22:54.185620] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.480 [2024-12-13 09:22:54.185634] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:21:00.480 00:21:00.480 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:00.480 [2024-12-13 09:22:54.298584] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
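This second spdk_nvme_identify run targets the NVM subsystem nqn.2016-06.io.spdk:cnode1 rather than the discovery subsystem, so after the same connect/enable sequence it also enumerates the Malloc0 namespace (nsid 1) shown in the nvmf_get_subsystems output earlier. Continuing the sketch above, walking the active namespaces of an already-connected controller with the public SPDK API looks roughly like this (hypothetical helper, not from the test scripts):

/*
 * Hypothetical helper continuing the earlier sketch: list the active
 * namespaces of an already-connected controller, roughly what the
 * "Active Namespaces" section of the identify output reports.
 */
#include "spdk/nvme.h"

static void list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		if (ns == NULL) {
			continue;
		}
		/* For cnode1 this reports the single Malloc0-backed namespace. */
		printf("nsid %u: %" PRIu64 " bytes, %u bytes/sector\n",
		       nsid, spdk_nvme_ns_get_size(ns),
		       spdk_nvme_ns_get_sector_size(ns));
	}
}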
00:21:00.480 [2024-12-13 09:22:54.298724] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81383 ] 00:21:00.742 [2024-12-13 09:22:54.486239] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:21:00.742 [2024-12-13 09:22:54.486384] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:00.742 [2024-12-13 09:22:54.486401] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:00.742 [2024-12-13 09:22:54.486423] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:00.742 [2024-12-13 09:22:54.486437] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:00.742 [2024-12-13 09:22:54.486791] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:21:00.742 [2024-12-13 09:22:54.486906] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:21:00.742 [2024-12-13 09:22:54.493446] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:00.742 [2024-12-13 09:22:54.493497] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:00.742 [2024-12-13 09:22:54.493508] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:00.742 [2024-12-13 09:22:54.493514] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:00.742 [2024-12-13 09:22:54.493589] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.742 [2024-12-13 09:22:54.493604] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.742 [2024-12-13 09:22:54.493611] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:00.742 [2024-12-13 09:22:54.493634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:00.742 [2024-12-13 09:22:54.493673] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:00.742 [2024-12-13 09:22:54.501341] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.742 [2024-12-13 09:22:54.501391] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.742 [2024-12-13 09:22:54.501400] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.742 [2024-12-13 09:22:54.501409] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:00.742 [2024-12-13 09:22:54.501430] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:00.742 [2024-12-13 09:22:54.501445] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:21:00.742 [2024-12-13 09:22:54.501456] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:21:00.742 [2024-12-13 09:22:54.501481] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.742 [2024-12-13 09:22:54.501489] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.742 
[2024-12-13 09:22:54.501496] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:00.742 [2024-12-13 09:22:54.501510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.742 [2024-12-13 09:22:54.501544] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:00.742 [2024-12-13 09:22:54.501630] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.742 [2024-12-13 09:22:54.501642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.742 [2024-12-13 09:22:54.501648] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.742 [2024-12-13 09:22:54.501655] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:00.742 [2024-12-13 09:22:54.501670] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:21:00.742 [2024-12-13 09:22:54.501683] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:21:00.742 [2024-12-13 09:22:54.501695] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.742 [2024-12-13 09:22:54.501706] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.742 [2024-12-13 09:22:54.501713] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:00.742 [2024-12-13 09:22:54.501728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.743 [2024-12-13 09:22:54.501757] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:00.743 [2024-12-13 09:22:54.501817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.743 [2024-12-13 09:22:54.501828] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.743 [2024-12-13 09:22:54.501834] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.743 [2024-12-13 09:22:54.501843] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:00.743 [2024-12-13 09:22:54.501854] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:21:00.743 [2024-12-13 09:22:54.501867] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:00.743 [2024-12-13 09:22:54.501879] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.743 [2024-12-13 09:22:54.501899] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.743 [2024-12-13 09:22:54.501905] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:00.743 [2024-12-13 09:22:54.501918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.743 [2024-12-13 09:22:54.501947] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:00.743 [2024-12-13 09:22:54.502006] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.743 [2024-12-13 09:22:54.502018] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.743 [2024-12-13 09:22:54.502023] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.743 [2024-12-13 09:22:54.502029] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:00.743 [2024-12-13 09:22:54.502040] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:00.743 [2024-12-13 09:22:54.502056] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.743 [2024-12-13 09:22:54.502064] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.743 [2024-12-13 09:22:54.502070] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:00.743 [2024-12-13 09:22:54.502083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.743 [2024-12-13 09:22:54.502109] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:00.743 [2024-12-13 09:22:54.502169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.743 [2024-12-13 09:22:54.502180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.743 [2024-12-13 09:22:54.502186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.743 [2024-12-13 09:22:54.502192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:00.743 [2024-12-13 09:22:54.502201] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:00.743 [2024-12-13 09:22:54.502214] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:00.743 [2024-12-13 09:22:54.502227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:00.743 [2024-12-13 09:22:54.502336] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:21:00.743 [2024-12-13 09:22:54.502347] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:00.743 [2024-12-13 09:22:54.502364] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.743 [2024-12-13 09:22:54.502373] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.743 [2024-12-13 09:22:54.502380] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:00.743 [2024-12-13 09:22:54.502393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.743 [2024-12-13 09:22:54.502422] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:00.743 [2024-12-13 09:22:54.502483] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.743 [2024-12-13 09:22:54.502494] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.743 [2024-12-13 09:22:54.502502] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.743 
[2024-12-13 09:22:54.502509] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:00.743 [2024-12-13 09:22:54.502518] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:00.743 [2024-12-13 09:22:54.502538] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.743 [2024-12-13 09:22:54.502546] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.743 [2024-12-13 09:22:54.502552] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:00.743 [2024-12-13 09:22:54.502565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.743 [2024-12-13 09:22:54.502591] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:00.743 [2024-12-13 09:22:54.502650] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.743 [2024-12-13 09:22:54.502661] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.743 [2024-12-13 09:22:54.502667] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.743 [2024-12-13 09:22:54.502673] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:00.743 [2024-12-13 09:22:54.502681] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:00.743 [2024-12-13 09:22:54.502690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:00.743 [2024-12-13 09:22:54.502714] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:21:00.743 [2024-12-13 09:22:54.502731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:00.743 [2024-12-13 09:22:54.502753] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.743 [2024-12-13 09:22:54.502764] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:00.743 [2024-12-13 09:22:54.502778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.743 [2024-12-13 09:22:54.502808] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:00.743 [2024-12-13 09:22:54.502974] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:00.743 [2024-12-13 09:22:54.502989] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:00.743 [2024-12-13 09:22:54.502996] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:00.743 [2024-12-13 09:22:54.503003] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:21:00.743 [2024-12-13 09:22:54.503011] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:21:00.743 [2024-12-13 09:22:54.503019] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:21:00.743 [2024-12-13 09:22:54.503033] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:00.743 [2024-12-13 09:22:54.503041] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:00.743 [2024-12-13 09:22:54.503054] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.743 [2024-12-13 09:22:54.503063] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.743 [2024-12-13 09:22:54.503069] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.743 [2024-12-13 09:22:54.503075] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:00.743 [2024-12-13 09:22:54.503097] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:21:00.743 [2024-12-13 09:22:54.503109] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:21:00.743 [2024-12-13 09:22:54.503120] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:21:00.743 [2024-12-13 09:22:54.503128] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:21:00.743 [2024-12-13 09:22:54.503136] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:21:00.743 [2024-12-13 09:22:54.503144] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:21:00.743 [2024-12-13 09:22:54.503176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:00.743 [2024-12-13 09:22:54.503188] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.743 [2024-12-13 09:22:54.503210] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.743 [2024-12-13 09:22:54.503217] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:00.743 [2024-12-13 09:22:54.503230] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:00.743 [2024-12-13 09:22:54.503259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:00.743 [2024-12-13 09:22:54.503325] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.743 [2024-12-13 09:22:54.503349] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.743 [2024-12-13 09:22:54.503357] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.743 [2024-12-13 09:22:54.503364] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:00.743 [2024-12-13 09:22:54.503380] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.743 [2024-12-13 09:22:54.503391] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.743 [2024-12-13 09:22:54.503397] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:00.743 [2024-12-13 09:22:54.503412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:00.743 [2024-12-13 09:22:54.503427] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.743 [2024-12-13 09:22:54.503433] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.743 [2024-12-13 09:22:54.503439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:21:00.743 [2024-12-13 09:22:54.503449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:00.743 [2024-12-13 09:22:54.503457] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.743 [2024-12-13 09:22:54.503463] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.744 [2024-12-13 09:22:54.503469] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:21:00.744 [2024-12-13 09:22:54.503478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:00.744 [2024-12-13 09:22:54.503486] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.744 [2024-12-13 09:22:54.503493] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.744 [2024-12-13 09:22:54.503499] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.744 [2024-12-13 09:22:54.503508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:00.744 [2024-12-13 09:22:54.503519] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:00.744 [2024-12-13 09:22:54.503536] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:00.744 [2024-12-13 09:22:54.503547] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.744 [2024-12-13 09:22:54.503554] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:21:00.744 [2024-12-13 09:22:54.503565] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.744 [2024-12-13 09:22:54.503595] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:00.744 [2024-12-13 09:22:54.503607] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:21:00.744 [2024-12-13 09:22:54.503614] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:21:00.744 [2024-12-13 09:22:54.503620] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.744 [2024-12-13 09:22:54.503627] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:21:00.744 [2024-12-13 09:22:54.503726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.744 [2024-12-13 09:22:54.503737] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.744 [2024-12-13 09:22:54.503743] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.744 [2024-12-13 09:22:54.503749] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:21:00.744 [2024-12-13 09:22:54.503761] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:21:00.744 [2024-12-13 09:22:54.503773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:00.744 [2024-12-13 09:22:54.503788] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:21:00.744 [2024-12-13 09:22:54.503799] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:00.744 [2024-12-13 09:22:54.503809] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.744 [2024-12-13 09:22:54.503817] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.744 [2024-12-13 09:22:54.503823] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:21:00.744 [2024-12-13 09:22:54.503836] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:00.744 [2024-12-13 09:22:54.503863] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:21:00.744 [2024-12-13 09:22:54.503923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.744 [2024-12-13 09:22:54.503937] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.744 [2024-12-13 09:22:54.503943] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.744 [2024-12-13 09:22:54.503950] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:21:00.744 [2024-12-13 09:22:54.504032] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:21:00.744 [2024-12-13 09:22:54.504053] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:00.744 [2024-12-13 09:22:54.504069] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.744 [2024-12-13 09:22:54.504082] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:21:00.744 [2024-12-13 09:22:54.504095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.744 [2024-12-13 09:22:54.504123] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:21:00.744 [2024-12-13 09:22:54.504206] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:00.744 [2024-12-13 09:22:54.504217] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:00.744 [2024-12-13 09:22:54.504223] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:00.744 [2024-12-13 09:22:54.504230] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:21:00.744 [2024-12-13 09:22:54.504237] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:21:00.744 [2024-12-13 09:22:54.504243] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.744 [2024-12-13 09:22:54.504259] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:00.744 [2024-12-13 09:22:54.504266] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:00.744 [2024-12-13 09:22:54.504313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.744 [2024-12-13 09:22:54.504346] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.744 [2024-12-13 09:22:54.504353] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.744 [2024-12-13 09:22:54.504360] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:21:00.744 [2024-12-13 09:22:54.504394] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:21:00.744 [2024-12-13 09:22:54.504429] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:21:00.744 [2024-12-13 09:22:54.504455] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:21:00.744 [2024-12-13 09:22:54.504473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.744 [2024-12-13 09:22:54.504484] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:21:00.744 [2024-12-13 09:22:54.504501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.744 [2024-12-13 09:22:54.504532] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:21:00.744 [2024-12-13 09:22:54.504645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:00.744 [2024-12-13 09:22:54.504657] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:00.744 [2024-12-13 09:22:54.504664] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:00.744 [2024-12-13 09:22:54.504671] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:21:00.744 [2024-12-13 09:22:54.504678] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:21:00.744 [2024-12-13 09:22:54.504685] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.744 [2024-12-13 09:22:54.504711] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:00.744 [2024-12-13 09:22:54.504733] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:00.744 [2024-12-13 09:22:54.504744] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.744 [2024-12-13 09:22:54.504753] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.744 [2024-12-13 09:22:54.504761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.744 [2024-12-13 09:22:54.504767] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:21:00.744 [2024-12-13 09:22:54.504800] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:00.744 [2024-12-13 09:22:54.504824] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:00.744 [2024-12-13 09:22:54.504840] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.744 [2024-12-13 09:22:54.504848] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:21:00.744 [2024-12-13 09:22:54.504861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.744 [2024-12-13 09:22:54.504889] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:21:00.744 [2024-12-13 09:22:54.504971] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:00.744 [2024-12-13 09:22:54.504982] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:00.744 [2024-12-13 09:22:54.504987] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:00.744 [2024-12-13 09:22:54.504993] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:21:00.744 [2024-12-13 09:22:54.505000] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:21:00.744 [2024-12-13 09:22:54.505006] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.744 [2024-12-13 09:22:54.505022] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:00.744 [2024-12-13 09:22:54.505028] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:00.744 [2024-12-13 09:22:54.505053] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.744 [2024-12-13 09:22:54.505063] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.744 [2024-12-13 09:22:54.505068] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.744 [2024-12-13 09:22:54.505074] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:21:00.744 [2024-12-13 09:22:54.505102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:00.744 [2024-12-13 09:22:54.505118] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:21:00.744 [2024-12-13 09:22:54.505130] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:21:00.744 [2024-12-13 09:22:54.505140] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:00.744 [2024-12-13 09:22:54.505149] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:00.744 [2024-12-13 09:22:54.505157] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:21:00.744 [2024-12-13 09:22:54.505165] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:21:00.745 [2024-12-13 09:22:54.505176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:21:00.745 [2024-12-13 09:22:54.505184] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:21:00.745 [2024-12-13 09:22:54.505221] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.505231] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:21:00.745 [2024-12-13 09:22:54.505244] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.745 [2024-12-13 09:22:54.505255] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.505262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.505268] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:21:00.745 [2024-12-13 09:22:54.505278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:00.745 [2024-12-13 09:22:54.509325] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:21:00.745 [2024-12-13 09:22:54.509355] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:21:00.745 [2024-12-13 09:22:54.509390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.745 [2024-12-13 09:22:54.509416] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.745 [2024-12-13 09:22:54.509425] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.509433] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:21:00.745 [2024-12-13 09:22:54.509445] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.745 [2024-12-13 09:22:54.509454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.745 [2024-12-13 09:22:54.509460] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.509466] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:21:00.745 [2024-12-13 09:22:54.509486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.509494] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:21:00.745 [2024-12-13 09:22:54.509508] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.745 [2024-12-13 09:22:54.509542] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:21:00.745 [2024-12-13 09:22:54.509622] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.745 [2024-12-13 09:22:54.509634] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.745 [2024-12-13 09:22:54.509640] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.509646] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:21:00.745 [2024-12-13 09:22:54.509680] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.509689] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:21:00.745 [2024-12-13 09:22:54.509701] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.745 [2024-12-13 09:22:54.509727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:21:00.745 [2024-12-13 09:22:54.509826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.745 [2024-12-13 09:22:54.509841] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.745 [2024-12-13 09:22:54.509847] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.509853] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:21:00.745 [2024-12-13 09:22:54.509870] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.509878] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:21:00.745 [2024-12-13 09:22:54.509894] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.745 [2024-12-13 09:22:54.509923] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:21:00.745 [2024-12-13 09:22:54.509992] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.745 [2024-12-13 09:22:54.510003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.745 [2024-12-13 09:22:54.510009] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.510015] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:21:00.745 [2024-12-13 09:22:54.510046] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.510057] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:21:00.745 [2024-12-13 09:22:54.510070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.745 [2024-12-13 09:22:54.510082] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.510089] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:21:00.745 [2024-12-13 09:22:54.510100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.745 [2024-12-13 09:22:54.510111] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.510118] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500000f080) 00:21:00.745 [2024-12-13 09:22:54.510133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.745 [2024-12-13 09:22:54.510151] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:21:00.745 [2024-12-13 09:22:54.510159] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:21:00.745 [2024-12-13 09:22:54.510170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.745 [2024-12-13 09:22:54.510199] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:21:00.745 [2024-12-13 09:22:54.510211] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:21:00.745 [2024-12-13 09:22:54.510218] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:21:00.745 [2024-12-13 09:22:54.510225] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:21:00.745 [2024-12-13 09:22:54.510441] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:00.745 [2024-12-13 09:22:54.510456] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:00.745 [2024-12-13 09:22:54.510463] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.510470] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8192, cccid=5 00:21:00.745 [2024-12-13 09:22:54.510478] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500000f080): expected_datao=0, payload_size=8192 00:21:00.745 [2024-12-13 09:22:54.510486] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.510523] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.510532] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.510542] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:00.745 [2024-12-13 09:22:54.510551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:00.745 [2024-12-13 09:22:54.510557] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.510563] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=4 00:21:00.745 [2024-12-13 09:22:54.510571] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:21:00.745 [2024-12-13 09:22:54.510577] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.510590] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.510597] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.510605] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:00.745 [2024-12-13 09:22:54.510614] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:00.745 [2024-12-13 09:22:54.510619] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.510625] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=6 00:21:00.745 [2024-12-13 09:22:54.510632] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:21:00.745 
[2024-12-13 09:22:54.510639] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.510650] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.510657] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.510665] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:00.745 [2024-12-13 09:22:54.510676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:00.745 [2024-12-13 09:22:54.510696] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.510702] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=7 00:21:00.745 [2024-12-13 09:22:54.510709] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:21:00.745 [2024-12-13 09:22:54.510715] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.510724] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.510730] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.510753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.745 [2024-12-13 09:22:54.510763] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.745 [2024-12-13 09:22:54.510769] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.745 [2024-12-13 09:22:54.510775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:21:00.745 [2024-12-13 09:22:54.510805] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.745 [2024-12-13 09:22:54.510838] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.745 [2024-12-13 09:22:54.510845] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.746 [2024-12-13 09:22:54.510852] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:21:00.746 [2024-12-13 09:22:54.510868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.746 [2024-12-13 09:22:54.510878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.746 [2024-12-13 09:22:54.510883] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.746 [2024-12-13 09:22:54.510889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500000f080 00:21:00.746 [2024-12-13 09:22:54.510901] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.746 [2024-12-13 09:22:54.510910] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.746 [2024-12-13 09:22:54.510918] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.746 [2024-12-13 09:22:54.510925] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:21:00.746 ===================================================== 00:21:00.746 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:00.746 ===================================================== 00:21:00.746 Controller Capabilities/Features 00:21:00.746 ================================ 00:21:00.746 Vendor ID: 8086 00:21:00.746 Subsystem Vendor ID: 8086 
00:21:00.746 Serial Number: SPDK00000000000001 00:21:00.746 Model Number: SPDK bdev Controller 00:21:00.746 Firmware Version: 25.01 00:21:00.746 Recommended Arb Burst: 6 00:21:00.746 IEEE OUI Identifier: e4 d2 5c 00:21:00.746 Multi-path I/O 00:21:00.746 May have multiple subsystem ports: Yes 00:21:00.746 May have multiple controllers: Yes 00:21:00.746 Associated with SR-IOV VF: No 00:21:00.746 Max Data Transfer Size: 131072 00:21:00.746 Max Number of Namespaces: 32 00:21:00.746 Max Number of I/O Queues: 127 00:21:00.746 NVMe Specification Version (VS): 1.3 00:21:00.746 NVMe Specification Version (Identify): 1.3 00:21:00.746 Maximum Queue Entries: 128 00:21:00.746 Contiguous Queues Required: Yes 00:21:00.746 Arbitration Mechanisms Supported 00:21:00.746 Weighted Round Robin: Not Supported 00:21:00.746 Vendor Specific: Not Supported 00:21:00.746 Reset Timeout: 15000 ms 00:21:00.746 Doorbell Stride: 4 bytes 00:21:00.746 NVM Subsystem Reset: Not Supported 00:21:00.746 Command Sets Supported 00:21:00.746 NVM Command Set: Supported 00:21:00.746 Boot Partition: Not Supported 00:21:00.746 Memory Page Size Minimum: 4096 bytes 00:21:00.746 Memory Page Size Maximum: 4096 bytes 00:21:00.746 Persistent Memory Region: Not Supported 00:21:00.746 Optional Asynchronous Events Supported 00:21:00.746 Namespace Attribute Notices: Supported 00:21:00.746 Firmware Activation Notices: Not Supported 00:21:00.746 ANA Change Notices: Not Supported 00:21:00.746 PLE Aggregate Log Change Notices: Not Supported 00:21:00.746 LBA Status Info Alert Notices: Not Supported 00:21:00.746 EGE Aggregate Log Change Notices: Not Supported 00:21:00.746 Normal NVM Subsystem Shutdown event: Not Supported 00:21:00.746 Zone Descriptor Change Notices: Not Supported 00:21:00.746 Discovery Log Change Notices: Not Supported 00:21:00.746 Controller Attributes 00:21:00.746 128-bit Host Identifier: Supported 00:21:00.746 Non-Operational Permissive Mode: Not Supported 00:21:00.746 NVM Sets: Not Supported 00:21:00.746 Read Recovery Levels: Not Supported 00:21:00.746 Endurance Groups: Not Supported 00:21:00.746 Predictable Latency Mode: Not Supported 00:21:00.746 Traffic Based Keep ALive: Not Supported 00:21:00.746 Namespace Granularity: Not Supported 00:21:00.746 SQ Associations: Not Supported 00:21:00.746 UUID List: Not Supported 00:21:00.746 Multi-Domain Subsystem: Not Supported 00:21:00.746 Fixed Capacity Management: Not Supported 00:21:00.746 Variable Capacity Management: Not Supported 00:21:00.746 Delete Endurance Group: Not Supported 00:21:00.746 Delete NVM Set: Not Supported 00:21:00.746 Extended LBA Formats Supported: Not Supported 00:21:00.746 Flexible Data Placement Supported: Not Supported 00:21:00.746 00:21:00.746 Controller Memory Buffer Support 00:21:00.746 ================================ 00:21:00.746 Supported: No 00:21:00.746 00:21:00.746 Persistent Memory Region Support 00:21:00.746 ================================ 00:21:00.746 Supported: No 00:21:00.746 00:21:00.746 Admin Command Set Attributes 00:21:00.746 ============================ 00:21:00.746 Security Send/Receive: Not Supported 00:21:00.746 Format NVM: Not Supported 00:21:00.746 Firmware Activate/Download: Not Supported 00:21:00.746 Namespace Management: Not Supported 00:21:00.746 Device Self-Test: Not Supported 00:21:00.746 Directives: Not Supported 00:21:00.746 NVMe-MI: Not Supported 00:21:00.746 Virtualization Management: Not Supported 00:21:00.746 Doorbell Buffer Config: Not Supported 00:21:00.746 Get LBA Status Capability: Not Supported 00:21:00.746 Command & 
Feature Lockdown Capability: Not Supported 00:21:00.746 Abort Command Limit: 4 00:21:00.746 Async Event Request Limit: 4 00:21:00.746 Number of Firmware Slots: N/A 00:21:00.746 Firmware Slot 1 Read-Only: N/A 00:21:00.746 Firmware Activation Without Reset: N/A 00:21:00.746 Multiple Update Detection Support: N/A 00:21:00.746 Firmware Update Granularity: No Information Provided 00:21:00.746 Per-Namespace SMART Log: No 00:21:00.746 Asymmetric Namespace Access Log Page: Not Supported 00:21:00.746 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:00.746 Command Effects Log Page: Supported 00:21:00.746 Get Log Page Extended Data: Supported 00:21:00.746 Telemetry Log Pages: Not Supported 00:21:00.746 Persistent Event Log Pages: Not Supported 00:21:00.746 Supported Log Pages Log Page: May Support 00:21:00.746 Commands Supported & Effects Log Page: Not Supported 00:21:00.746 Feature Identifiers & Effects Log Page:May Support 00:21:00.746 NVMe-MI Commands & Effects Log Page: May Support 00:21:00.746 Data Area 4 for Telemetry Log: Not Supported 00:21:00.746 Error Log Page Entries Supported: 128 00:21:00.746 Keep Alive: Supported 00:21:00.746 Keep Alive Granularity: 10000 ms 00:21:00.746 00:21:00.746 NVM Command Set Attributes 00:21:00.746 ========================== 00:21:00.746 Submission Queue Entry Size 00:21:00.746 Max: 64 00:21:00.746 Min: 64 00:21:00.746 Completion Queue Entry Size 00:21:00.746 Max: 16 00:21:00.746 Min: 16 00:21:00.746 Number of Namespaces: 32 00:21:00.746 Compare Command: Supported 00:21:00.746 Write Uncorrectable Command: Not Supported 00:21:00.746 Dataset Management Command: Supported 00:21:00.746 Write Zeroes Command: Supported 00:21:00.746 Set Features Save Field: Not Supported 00:21:00.746 Reservations: Supported 00:21:00.746 Timestamp: Not Supported 00:21:00.746 Copy: Supported 00:21:00.746 Volatile Write Cache: Present 00:21:00.746 Atomic Write Unit (Normal): 1 00:21:00.746 Atomic Write Unit (PFail): 1 00:21:00.746 Atomic Compare & Write Unit: 1 00:21:00.746 Fused Compare & Write: Supported 00:21:00.746 Scatter-Gather List 00:21:00.746 SGL Command Set: Supported 00:21:00.746 SGL Keyed: Supported 00:21:00.746 SGL Bit Bucket Descriptor: Not Supported 00:21:00.746 SGL Metadata Pointer: Not Supported 00:21:00.746 Oversized SGL: Not Supported 00:21:00.746 SGL Metadata Address: Not Supported 00:21:00.746 SGL Offset: Supported 00:21:00.746 Transport SGL Data Block: Not Supported 00:21:00.746 Replay Protected Memory Block: Not Supported 00:21:00.746 00:21:00.746 Firmware Slot Information 00:21:00.746 ========================= 00:21:00.746 Active slot: 1 00:21:00.746 Slot 1 Firmware Revision: 25.01 00:21:00.746 00:21:00.746 00:21:00.746 Commands Supported and Effects 00:21:00.746 ============================== 00:21:00.746 Admin Commands 00:21:00.746 -------------- 00:21:00.746 Get Log Page (02h): Supported 00:21:00.746 Identify (06h): Supported 00:21:00.746 Abort (08h): Supported 00:21:00.746 Set Features (09h): Supported 00:21:00.746 Get Features (0Ah): Supported 00:21:00.746 Asynchronous Event Request (0Ch): Supported 00:21:00.746 Keep Alive (18h): Supported 00:21:00.746 I/O Commands 00:21:00.746 ------------ 00:21:00.746 Flush (00h): Supported LBA-Change 00:21:00.746 Write (01h): Supported LBA-Change 00:21:00.746 Read (02h): Supported 00:21:00.746 Compare (05h): Supported 00:21:00.746 Write Zeroes (08h): Supported LBA-Change 00:21:00.746 Dataset Management (09h): Supported LBA-Change 00:21:00.746 Copy (19h): Supported LBA-Change 00:21:00.746 00:21:00.746 Error Log 00:21:00.746 
========= 00:21:00.746 00:21:00.746 Arbitration 00:21:00.746 =========== 00:21:00.746 Arbitration Burst: 1 00:21:00.746 00:21:00.746 Power Management 00:21:00.746 ================ 00:21:00.746 Number of Power States: 1 00:21:00.746 Current Power State: Power State #0 00:21:00.746 Power State #0: 00:21:00.746 Max Power: 0.00 W 00:21:00.746 Non-Operational State: Operational 00:21:00.746 Entry Latency: Not Reported 00:21:00.746 Exit Latency: Not Reported 00:21:00.746 Relative Read Throughput: 0 00:21:00.746 Relative Read Latency: 0 00:21:00.746 Relative Write Throughput: 0 00:21:00.746 Relative Write Latency: 0 00:21:00.747 Idle Power: Not Reported 00:21:00.747 Active Power: Not Reported 00:21:00.747 Non-Operational Permissive Mode: Not Supported 00:21:00.747 00:21:00.747 Health Information 00:21:00.747 ================== 00:21:00.747 Critical Warnings: 00:21:00.747 Available Spare Space: OK 00:21:00.747 Temperature: OK 00:21:00.747 Device Reliability: OK 00:21:00.747 Read Only: No 00:21:00.747 Volatile Memory Backup: OK 00:21:00.747 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:00.747 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:00.747 Available Spare: 0% 00:21:00.747 Available Spare Threshold: 0% 00:21:00.747 Life Percentage Used:[2024-12-13 09:22:54.511129] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.747 [2024-12-13 09:22:54.511143] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:21:00.747 [2024-12-13 09:22:54.511158] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.747 [2024-12-13 09:22:54.511194] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:21:00.747 [2024-12-13 09:22:54.511283] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.747 [2024-12-13 09:22:54.511297] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.747 [2024-12-13 09:22:54.511318] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.747 [2024-12-13 09:22:54.511329] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:21:00.747 [2024-12-13 09:22:54.511441] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:21:00.747 [2024-12-13 09:22:54.511472] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:00.747 [2024-12-13 09:22:54.511486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.747 [2024-12-13 09:22:54.511496] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:21:00.747 [2024-12-13 09:22:54.511511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.747 [2024-12-13 09:22:54.511519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:21:00.747 [2024-12-13 09:22:54.511528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.747 [2024-12-13 09:22:54.511535] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 
00:21:00.747 [2024-12-13 09:22:54.511543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.747 [2024-12-13 09:22:54.511557] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.747 [2024-12-13 09:22:54.511566] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.747 [2024-12-13 09:22:54.511572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.747 [2024-12-13 09:22:54.511586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.747 [2024-12-13 09:22:54.511636] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.747 [2024-12-13 09:22:54.511707] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.747 [2024-12-13 09:22:54.511722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.747 [2024-12-13 09:22:54.511732] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.747 [2024-12-13 09:22:54.511739] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.747 [2024-12-13 09:22:54.511752] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.747 [2024-12-13 09:22:54.511760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.747 [2024-12-13 09:22:54.511767] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.747 [2024-12-13 09:22:54.511779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.747 [2024-12-13 09:22:54.511815] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.747 [2024-12-13 09:22:54.511941] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.747 [2024-12-13 09:22:54.511967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.747 [2024-12-13 09:22:54.511975] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.747 [2024-12-13 09:22:54.511981] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.747 [2024-12-13 09:22:54.511991] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:21:00.747 [2024-12-13 09:22:54.511999] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:21:00.747 [2024-12-13 09:22:54.512019] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.747 [2024-12-13 09:22:54.512027] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.747 [2024-12-13 09:22:54.512034] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.747 [2024-12-13 09:22:54.512046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.747 [2024-12-13 09:22:54.512073] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.747 [2024-12-13 09:22:54.512133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.747 [2024-12-13 
09:22:54.512143] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.747 [2024-12-13 09:22:54.512149] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.747 [2024-12-13 09:22:54.512155] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.747 [2024-12-13 09:22:54.512172] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.747 [2024-12-13 09:22:54.512180] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.747 [2024-12-13 09:22:54.512186] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.747 [2024-12-13 09:22:54.512197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.747 [2024-12-13 09:22:54.512222] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.747 [2024-12-13 09:22:54.512310] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.747 [2024-12-13 09:22:54.512339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.747 [2024-12-13 09:22:54.512345] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.747 [2024-12-13 09:22:54.512352] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.747 [2024-12-13 09:22:54.512370] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.747 [2024-12-13 09:22:54.512378] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.747 [2024-12-13 09:22:54.512384] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.747 [2024-12-13 09:22:54.512400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.747 [2024-12-13 09:22:54.512429] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.747 [2024-12-13 09:22:54.512488] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.747 [2024-12-13 09:22:54.512499] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.747 [2024-12-13 09:22:54.512505] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.747 [2024-12-13 09:22:54.512512] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.747 [2024-12-13 09:22:54.512528] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.747 [2024-12-13 09:22:54.512539] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.747 [2024-12-13 09:22:54.512546] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.747 [2024-12-13 09:22:54.512560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.747 [2024-12-13 09:22:54.512592] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.747 [2024-12-13 09:22:54.512658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.747 [2024-12-13 09:22:54.512684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.747 [2024-12-13 09:22:54.512694] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.747 [2024-12-13 09:22:54.512715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.747 [2024-12-13 09:22:54.512731] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.747 [2024-12-13 09:22:54.512739] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.747 [2024-12-13 09:22:54.512744] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.747 [2024-12-13 09:22:54.512757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.747 [2024-12-13 09:22:54.512782] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.747 [2024-12-13 09:22:54.512844] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.747 [2024-12-13 09:22:54.512855] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.747 [2024-12-13 09:22:54.512860] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.748 [2024-12-13 09:22:54.512867] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.748 [2024-12-13 09:22:54.512882] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.748 [2024-12-13 09:22:54.512890] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.748 [2024-12-13 09:22:54.512896] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.748 [2024-12-13 09:22:54.512907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.748 [2024-12-13 09:22:54.512932] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.748 [2024-12-13 09:22:54.512986] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.748 [2024-12-13 09:22:54.512996] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.748 [2024-12-13 09:22:54.513002] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.748 [2024-12-13 09:22:54.513008] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.748 [2024-12-13 09:22:54.513024] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.748 [2024-12-13 09:22:54.513031] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.748 [2024-12-13 09:22:54.513037] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.748 [2024-12-13 09:22:54.513048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.748 [2024-12-13 09:22:54.513073] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.748 [2024-12-13 09:22:54.513138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.748 [2024-12-13 09:22:54.513149] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.748 [2024-12-13 09:22:54.513155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.748 [2024-12-13 09:22:54.513161] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.748 [2024-12-13 09:22:54.513177] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.748 [2024-12-13 09:22:54.513184] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.748 [2024-12-13 09:22:54.513195] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.748 [2024-12-13 09:22:54.513207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.748 [2024-12-13 09:22:54.513232] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.748 [2024-12-13 09:22:54.513309] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.748 [2024-12-13 09:22:54.513323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.748 [2024-12-13 09:22:54.513329] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.748 [2024-12-13 09:22:54.517353] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.748 [2024-12-13 09:22:54.517403] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:00.748 [2024-12-13 09:22:54.517414] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:00.748 [2024-12-13 09:22:54.517420] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:00.748 [2024-12-13 09:22:54.517438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.748 [2024-12-13 09:22:54.517470] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:00.748 [2024-12-13 09:22:54.517553] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:00.748 [2024-12-13 09:22:54.517565] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:00.748 [2024-12-13 09:22:54.517571] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:00.748 [2024-12-13 09:22:54.517578] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:00.748 [2024-12-13 09:22:54.517592] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:21:00.748 0% 00:21:00.748 Data Units Read: 0 00:21:00.748 Data Units Written: 0 00:21:00.748 Host Read Commands: 0 00:21:00.748 Host Write Commands: 0 00:21:00.748 Controller Busy Time: 0 minutes 00:21:00.748 Power Cycles: 0 00:21:00.748 Power On Hours: 0 hours 00:21:00.748 Unsafe Shutdowns: 0 00:21:00.748 Unrecoverable Media Errors: 0 00:21:00.748 Lifetime Error Log Entries: 0 00:21:00.748 Warning Temperature Time: 0 minutes 00:21:00.748 Critical Temperature Time: 0 minutes 00:21:00.748 00:21:00.748 Number of Queues 00:21:00.748 ================ 00:21:00.748 Number of I/O Submission Queues: 127 00:21:00.748 Number of I/O Completion Queues: 127 00:21:00.748 00:21:00.748 Active Namespaces 00:21:00.748 ================= 00:21:00.748 Namespace ID:1 00:21:00.748 Error Recovery Timeout: Unlimited 00:21:00.748 Command Set Identifier: NVM (00h) 00:21:00.748 Deallocate: Supported 00:21:00.748 Deallocated/Unwritten Error: Not Supported 00:21:00.748 Deallocated Read Value: Unknown 00:21:00.748 Deallocate in Write Zeroes: Not Supported 
00:21:00.748 Deallocated Guard Field: 0xFFFF 00:21:00.748 Flush: Supported 00:21:00.748 Reservation: Supported 00:21:00.748 Namespace Sharing Capabilities: Multiple Controllers 00:21:00.748 Size (in LBAs): 131072 (0GiB) 00:21:00.748 Capacity (in LBAs): 131072 (0GiB) 00:21:00.748 Utilization (in LBAs): 131072 (0GiB) 00:21:00.748 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:00.748 EUI64: ABCDEF0123456789 00:21:00.748 UUID: 1342e938-15fa-4bcf-b1d5-b132ade61504 00:21:00.748 Thin Provisioning: Not Supported 00:21:00.748 Per-NS Atomic Units: Yes 00:21:00.748 Atomic Boundary Size (Normal): 0 00:21:00.748 Atomic Boundary Size (PFail): 0 00:21:00.748 Atomic Boundary Offset: 0 00:21:00.748 Maximum Single Source Range Length: 65535 00:21:00.748 Maximum Copy Length: 65535 00:21:00.748 Maximum Source Range Count: 1 00:21:00.748 NGUID/EUI64 Never Reused: No 00:21:00.748 Namespace Write Protected: No 00:21:00.748 Number of LBA Formats: 1 00:21:00.748 Current LBA Format: LBA Format #00 00:21:00.748 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:00.748 00:21:00.748 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:00.748 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:00.748 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.748 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:01.008 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.008 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:01.008 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:01.008 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:01.008 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:21:01.008 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:01.008 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:21:01.008 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:01.008 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:01.008 rmmod nvme_tcp 00:21:01.008 rmmod nvme_fabrics 00:21:01.008 rmmod nvme_keyring 00:21:01.008 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:01.008 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:21:01.008 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:21:01.008 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 81344 ']' 00:21:01.008 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 81344 00:21:01.008 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 81344 ']' 00:21:01.008 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 81344 00:21:01.008 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:21:01.008 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:01.008 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81344 00:21:01.008 09:22:54 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:01.008 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:01.008 killing process with pid 81344 00:21:01.008 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81344' 00:21:01.008 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 81344 00:21:01.008 09:22:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 81344 00:21:01.944 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:01.944 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:01.944 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:01.944 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:21:01.944 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:21:01.944 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:01.944 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:21:01.944 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:01.944 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:01.944 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:01.944 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:01.944 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:01.944 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:01.944 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:01.944 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:01.944 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:01.944 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:01.944 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:02.203 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:02.203 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:02.203 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:02.203 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:02.203 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:02.203 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.203 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.203 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.203 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@300 -- # return 0 00:21:02.203 00:21:02.203 real 0m3.924s 00:21:02.203 user 0m10.427s 00:21:02.203 sys 0m0.925s 00:21:02.203 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:02.203 09:22:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:02.203 ************************************ 00:21:02.203 END TEST nvmf_identify 00:21:02.203 ************************************ 00:21:02.203 09:22:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:02.203 09:22:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:02.203 09:22:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:02.203 09:22:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.203 ************************************ 00:21:02.203 START TEST nvmf_perf 00:21:02.203 ************************************ 00:21:02.203 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:02.203 * Looking for test storage... 00:21:02.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:02.203 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:02.203 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:02.203 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:02.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.463 --rc genhtml_branch_coverage=1 00:21:02.463 --rc genhtml_function_coverage=1 00:21:02.463 --rc genhtml_legend=1 00:21:02.463 --rc geninfo_all_blocks=1 00:21:02.463 --rc geninfo_unexecuted_blocks=1 00:21:02.463 00:21:02.463 ' 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:02.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.463 --rc genhtml_branch_coverage=1 00:21:02.463 --rc genhtml_function_coverage=1 00:21:02.463 --rc genhtml_legend=1 00:21:02.463 --rc geninfo_all_blocks=1 00:21:02.463 --rc geninfo_unexecuted_blocks=1 00:21:02.463 00:21:02.463 ' 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:02.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.463 --rc genhtml_branch_coverage=1 00:21:02.463 --rc genhtml_function_coverage=1 00:21:02.463 --rc genhtml_legend=1 00:21:02.463 --rc geninfo_all_blocks=1 00:21:02.463 --rc geninfo_unexecuted_blocks=1 00:21:02.463 00:21:02.463 ' 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:02.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.463 --rc genhtml_branch_coverage=1 00:21:02.463 --rc genhtml_function_coverage=1 00:21:02.463 --rc genhtml_legend=1 00:21:02.463 --rc geninfo_all_blocks=1 00:21:02.463 --rc geninfo_unexecuted_blocks=1 00:21:02.463 00:21:02.463 ' 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:02.463 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:02.464 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:02.464 Cannot find device "nvmf_init_br" 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:02.464 Cannot find device "nvmf_init_br2" 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:02.464 Cannot find device "nvmf_tgt_br" 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:02.464 Cannot find device "nvmf_tgt_br2" 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:02.464 Cannot find device "nvmf_init_br" 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:02.464 Cannot find device "nvmf_init_br2" 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:02.464 Cannot find device "nvmf_tgt_br" 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:02.464 Cannot find device "nvmf_tgt_br2" 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:02.464 Cannot find device "nvmf_br" 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:02.464 Cannot find device "nvmf_init_if" 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:02.464 Cannot find device "nvmf_init_if2" 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:02.464 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:02.464 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:21:02.464 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:02.724 09:22:56 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:02.724 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:02.724 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:21:02.724 00:21:02.724 --- 10.0.0.3 ping statistics --- 00:21:02.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.724 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:02.724 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:21:02.724 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:21:02.724 00:21:02.724 --- 10.0.0.4 ping statistics --- 00:21:02.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.724 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:02.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:02.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:21:02.724 00:21:02.724 --- 10.0.0.1 ping statistics --- 00:21:02.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.724 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:02.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:02.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:21:02.724 00:21:02.724 --- 10.0.0.2 ping statistics --- 00:21:02.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.724 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:02.724 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:02.983 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:02.983 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:02.983 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:02.983 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:02.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.983 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=81612 00:21:02.983 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 81612 00:21:02.983 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:02.983 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 81612 ']' 00:21:02.983 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.983 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.983 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
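The nvmf_veth_init sequence traced above builds the virtual topology the rest of this job runs over: two veth pairs joined by a software bridge, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace so that nvmf_tgt listens on 10.0.0.3 while the initiator stays in the root namespace on 10.0.0.1. A condensed sketch of the same commands, keeping only one initiator/target pair (the interface names and the 10.0.0.0/24 addresses are this job's conventions, not requirements):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3    # initiator -> target reachability check, as above

The target application itself is then started with ip netns exec nvmf_tgt_ns_spdk, which is why every listener added later in this log binds to 10.0.0.3 rather than to a host interface.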
00:21:02.983 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.983 09:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:02.983 [2024-12-13 09:22:56.753393] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:21:02.983 [2024-12-13 09:22:56.753797] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.243 [2024-12-13 09:22:56.929732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:03.243 [2024-12-13 09:22:57.012204] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.243 [2024-12-13 09:22:57.012265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.243 [2024-12-13 09:22:57.012318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:03.243 [2024-12-13 09:22:57.012331] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:03.243 [2024-12-13 09:22:57.012342] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:03.243 [2024-12-13 09:22:57.013912] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.243 [2024-12-13 09:22:57.014045] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.243 [2024-12-13 09:22:57.015085] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.243 [2024-12-13 09:22:57.015100] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:03.502 [2024-12-13 09:22:57.173186] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:03.760 09:22:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.760 09:22:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:21:03.760 09:22:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:03.760 09:22:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:03.760 09:22:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:04.019 09:22:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.019 09:22:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:04.019 09:22:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:21:04.279 09:22:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:04.279 09:22:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:21:04.538 09:22:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:21:04.538 09:22:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:04.797 09:22:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:04.797 09:22:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:21:04.797 09:22:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:04.797 09:22:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:04.797 09:22:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:05.056 [2024-12-13 09:22:58.851893] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.056 09:22:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:05.315 09:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:05.315 09:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:05.574 09:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:05.574 09:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:05.833 09:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:06.092 [2024-12-13 09:22:59.793966] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:06.092 09:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:21:06.351 09:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:21:06.351 09:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:21:06.351 09:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:06.351 09:23:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:21:07.727 Initializing NVMe Controllers 00:21:07.727 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:07.727 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:21:07.727 Initialization complete. Launching workers. 00:21:07.727 ======================================================== 00:21:07.727 Latency(us) 00:21:07.727 Device Information : IOPS MiB/s Average min max 00:21:07.727 PCIE (0000:00:10.0) NSID 1 from core 0: 20859.31 81.48 1534.28 383.27 8272.08 00:21:07.727 ======================================================== 00:21:07.727 Total : 20859.31 81.48 1534.28 383.27 8272.08 00:21:07.727 00:21:07.727 09:23:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:09.105 Initializing NVMe Controllers 00:21:09.105 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:09.105 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:09.105 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:09.105 Initialization complete. Launching workers. 
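Stripped of the shell plumbing, the target-side setup traced just above comes down to a short rpc.py sequence; a minimal sketch, assuming nvmf_tgt is already running inside the namespace and that gen_nvme.sh attached the local controller at 0000:00:10.0 as Nvme0n1 (NQN, serial number and listener address copied from this job):

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py bdev_malloc_create 64 512                       # 64 MiB malloc bdev, 512-byte blocks -> Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

The two add_ns calls are what make the fabric runs below report two namespaces with different sector sizes: NSID 1 is the 512-byte Malloc0 and NSID 2 is the 4096-byte Nvme0n1.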
00:21:09.105 ======================================================== 00:21:09.105 Latency(us) 00:21:09.105 Device Information : IOPS MiB/s Average min max 00:21:09.105 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3058.69 11.95 325.14 125.29 4658.62 00:21:09.105 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 127.49 0.50 7905.31 4971.86 12012.25 00:21:09.105 ======================================================== 00:21:09.105 Total : 3186.17 12.45 628.45 125.29 12012.25 00:21:09.105 00:21:09.105 09:23:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:10.483 Initializing NVMe Controllers 00:21:10.483 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:10.483 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:10.483 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:10.483 Initialization complete. Launching workers. 00:21:10.483 ======================================================== 00:21:10.483 Latency(us) 00:21:10.483 Device Information : IOPS MiB/s Average min max 00:21:10.483 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8026.00 31.35 3991.02 575.19 11272.76 00:21:10.483 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3688.00 14.41 8725.05 4090.93 16072.31 00:21:10.483 ======================================================== 00:21:10.483 Total : 11714.00 45.76 5481.47 575.19 16072.31 00:21:10.483 00:21:10.483 09:23:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:21:10.483 09:23:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:13.017 Initializing NVMe Controllers 00:21:13.017 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:13.017 Controller IO queue size 128, less than required. 00:21:13.017 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:13.017 Controller IO queue size 128, less than required. 00:21:13.017 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:13.017 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:13.017 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:13.017 Initialization complete. Launching workers. 
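Each perf pass in this block is a single spdk_nvme_perf invocation; only the -r target string decides whether the I/O hits the local PCIe controller directly or goes over the NVMe/TCP connection. The two forms used here, shown with one representative set of workload flags (queue depth, I/O size and runtime vary from pass to pass, and extra flags such as -HI or --transport-stat are added where a pass needs them):

  # local PCIe baseline against the emulated controller
  spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0'
  # the same style of workload over the fabric, to the subsystem created above
  spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'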
00:21:13.017 ======================================================== 00:21:13.017 Latency(us) 00:21:13.017 Device Information : IOPS MiB/s Average min max 00:21:13.017 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1550.84 387.71 84858.29 41757.63 214831.64 00:21:13.017 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 623.44 155.86 213503.33 84228.72 401967.27 00:21:13.017 ======================================================== 00:21:13.017 Total : 2174.28 543.57 121745.01 41757.63 401967.27 00:21:13.017 00:21:13.275 09:23:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:21:13.534 Initializing NVMe Controllers 00:21:13.534 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:13.534 Controller IO queue size 128, less than required. 00:21:13.534 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:13.534 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:13.534 Controller IO queue size 128, less than required. 00:21:13.534 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:13.534 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:21:13.534 WARNING: Some requested NVMe devices were skipped 00:21:13.534 No valid NVMe controllers or AIO or URING devices found 00:21:13.534 09:23:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:21:16.823 Initializing NVMe Controllers 00:21:16.823 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:16.823 Controller IO queue size 128, less than required. 00:21:16.823 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:16.823 Controller IO queue size 128, less than required. 00:21:16.823 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:16.823 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:16.823 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:16.823 Initialization complete. Launching workers. 
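The pass launched just above adds --transport-stat, so before the usual latency table the initiator prints per-queue TCP transport counters (polls, idle polls, socket completions, NVMe completions, submitted and queued requests) for each namespace; that is what the two per-lcore statistics blocks below are. The invocation itself, copied from the trace above:

  spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat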
00:21:16.823 00:21:16.823 ==================== 00:21:16.823 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:16.823 TCP transport: 00:21:16.823 polls: 7209 00:21:16.823 idle_polls: 3801 00:21:16.823 sock_completions: 3408 00:21:16.823 nvme_completions: 5895 00:21:16.823 submitted_requests: 8862 00:21:16.823 queued_requests: 1 00:21:16.823 00:21:16.823 ==================== 00:21:16.823 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:16.823 TCP transport: 00:21:16.823 polls: 7971 00:21:16.823 idle_polls: 4011 00:21:16.823 sock_completions: 3960 00:21:16.823 nvme_completions: 6109 00:21:16.823 submitted_requests: 9160 00:21:16.823 queued_requests: 1 00:21:16.823 ======================================================== 00:21:16.823 Latency(us) 00:21:16.823 Device Information : IOPS MiB/s Average min max 00:21:16.823 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1473.19 368.30 89743.06 46039.47 244894.79 00:21:16.823 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1526.68 381.67 87408.48 43306.46 363720.88 00:21:16.823 ======================================================== 00:21:16.823 Total : 2999.87 749.97 88554.96 43306.46 363720.88 00:21:16.823 00:21:16.823 09:23:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:16.823 09:23:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:16.823 09:23:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:21:16.823 09:23:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:21:16.824 09:23:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:21:17.082 09:23:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=dc43ee83-12b4-49e1-bade-6ced52cca900 00:21:17.082 09:23:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb dc43ee83-12b4-49e1-bade-6ced52cca900 00:21:17.082 09:23:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=dc43ee83-12b4-49e1-bade-6ced52cca900 00:21:17.082 09:23:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:21:17.082 09:23:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:21:17.082 09:23:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:21:17.082 09:23:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:17.340 09:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:21:17.340 { 00:21:17.340 "uuid": "dc43ee83-12b4-49e1-bade-6ced52cca900", 00:21:17.340 "name": "lvs_0", 00:21:17.340 "base_bdev": "Nvme0n1", 00:21:17.340 "total_data_clusters": 1278, 00:21:17.340 "free_clusters": 1278, 00:21:17.340 "block_size": 4096, 00:21:17.340 "cluster_size": 4194304 00:21:17.340 } 00:21:17.340 ]' 00:21:17.340 09:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="dc43ee83-12b4-49e1-bade-6ced52cca900") .free_clusters' 00:21:17.340 09:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1278 00:21:17.340 09:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="dc43ee83-12b4-49e1-bade-6ced52cca900") .cluster_size' 00:21:17.340 5112 00:21:17.340 09:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:21:17.340 09:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5112 00:21:17.340 09:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5112 00:21:17.340 09:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:21:17.340 09:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dc43ee83-12b4-49e1-bade-6ced52cca900 lbd_0 5112 00:21:17.599 09:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=995086b1-9d5e-4c55-9cc5-c11ba51c2e68 00:21:17.599 09:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 995086b1-9d5e-4c55-9cc5-c11ba51c2e68 lvs_n_0 00:21:18.167 09:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=c579b4db-a07f-4f3e-975b-34001b9bff1c 00:21:18.167 09:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb c579b4db-a07f-4f3e-975b-34001b9bff1c 00:21:18.167 09:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=c579b4db-a07f-4f3e-975b-34001b9bff1c 00:21:18.167 09:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:21:18.167 09:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:21:18.167 09:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:21:18.167 09:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:18.167 09:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:21:18.167 { 00:21:18.167 "uuid": "dc43ee83-12b4-49e1-bade-6ced52cca900", 00:21:18.167 "name": "lvs_0", 00:21:18.167 "base_bdev": "Nvme0n1", 00:21:18.167 "total_data_clusters": 1278, 00:21:18.167 "free_clusters": 0, 00:21:18.167 "block_size": 4096, 00:21:18.167 "cluster_size": 4194304 00:21:18.167 }, 00:21:18.167 { 00:21:18.167 "uuid": "c579b4db-a07f-4f3e-975b-34001b9bff1c", 00:21:18.167 "name": "lvs_n_0", 00:21:18.167 "base_bdev": "995086b1-9d5e-4c55-9cc5-c11ba51c2e68", 00:21:18.167 "total_data_clusters": 1276, 00:21:18.167 "free_clusters": 1276, 00:21:18.167 "block_size": 4096, 00:21:18.167 "cluster_size": 4194304 00:21:18.167 } 00:21:18.167 ]' 00:21:18.167 09:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="c579b4db-a07f-4f3e-975b-34001b9bff1c") .free_clusters' 00:21:18.426 09:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1276 00:21:18.426 09:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="c579b4db-a07f-4f3e-975b-34001b9bff1c") .cluster_size' 00:21:18.426 5104 00:21:18.426 09:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:21:18.426 09:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5104 00:21:18.426 09:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5104 00:21:18.426 09:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:21:18.426 09:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c579b4db-a07f-4f3e-975b-34001b9bff1c lbd_nest_0 5104 00:21:18.685 09:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=dabf54b8-8bd5-4979-9c0d-2f7c7e786b1f 00:21:18.685 09:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:18.944 09:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:21:18.944 09:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 dabf54b8-8bd5-4979-9c0d-2f7c7e786b1f 00:21:19.203 09:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:19.462 09:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:21:19.463 09:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:21:19.463 09:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:19.463 09:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:19.463 09:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:19.722 Initializing NVMe Controllers 00:21:19.722 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:19.722 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:19.722 WARNING: Some requested NVMe devices were skipped 00:21:19.722 No valid NVMe controllers or AIO or URING devices found 00:21:19.722 09:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:19.722 09:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:31.944 Initializing NVMe Controllers 00:21:31.944 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:31.944 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:31.944 Initialization complete. Launching workers. 
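The get_lvs_free_mb helper exercised above just multiplies an lvstore's free_clusters by its cluster_size and converts to MiB, which is where the 1278 * 4 MiB = 5112 and 1276 * 4 MiB = 5104 figures come from. A condensed standalone sketch of the same calculation (a simplified reconstruction, not the helper itself; UUID and rpc.py path taken from this run):

    # Sketch: MiB available in the lvs_0 store (1278 free clusters * 4 MiB gave the 5112 above).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    uuid=dc43ee83-12b4-49e1-bade-6ced52cca900
    fc=$("$rpc" bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .free_clusters")
    cs=$("$rpc" bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .cluster_size")
    echo $(( fc * cs / 1024 / 1024 ))    # -> 5112 at the point the helper ran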
00:21:31.944 ======================================================== 00:21:31.944 Latency(us) 00:21:31.944 Device Information : IOPS MiB/s Average min max 00:21:31.944 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 853.50 106.69 1170.61 387.80 8281.42 00:21:31.944 ======================================================== 00:21:31.944 Total : 853.50 106.69 1170.61 387.80 8281.42 00:21:31.944 00:21:31.944 09:23:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:31.944 09:23:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:31.944 09:23:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:31.944 Initializing NVMe Controllers 00:21:31.944 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:31.944 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:31.944 WARNING: Some requested NVMe devices were skipped 00:21:31.944 No valid NVMe controllers or AIO or URING devices found 00:21:31.944 09:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:31.944 09:23:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:41.968 Initializing NVMe Controllers 00:21:41.968 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:41.968 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:41.968 Initialization complete. Launching workers. 
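The MiB/s column in these latency tables is simply IOPS times IO size: for the queue-depth-1, 128 KiB run above, 853.50 * 131072 / 2^20 ≈ 106.69 MiB/s, matching the table, and at queue depth 1 the IOPS figure is itself roughly 10^6 divided by the average latency in microseconds (10^6 / 1170.61 ≈ 854). A one-line check:

    # Sketch: recompute the MiB/s column from the IOPS column of the q=1, 128 KiB run above.
    awk 'BEGIN { printf "%.2f MiB/s\n", 853.50 * 131072 / 2^20 }'   # -> 106.69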
00:21:41.968 ======================================================== 00:21:41.968 Latency(us) 00:21:41.968 Device Information : IOPS MiB/s Average min max 00:21:41.968 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1345.67 168.21 23803.89 6231.80 71536.73 00:21:41.968 ======================================================== 00:21:41.968 Total : 1345.67 168.21 23803.89 6231.80 71536.73 00:21:41.968 00:21:41.968 09:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:41.968 09:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:41.968 09:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:41.968 Initializing NVMe Controllers 00:21:41.968 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:41.968 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:41.968 WARNING: Some requested NVMe devices were skipped 00:21:41.969 No valid NVMe controllers or AIO or URING devices found 00:21:41.969 09:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:41.969 09:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:51.947 Initializing NVMe Controllers 00:21:51.947 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:51.947 Controller IO queue size 128, less than required. 00:21:51.947 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:51.947 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:51.947 Initialization complete. Launching workers. 
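These runs are driven by the qd_depth=("1" "32" "128") and io_size=("512" "131072") arrays set earlier in perf.sh; each 512-byte pass is skipped because the lvol-backed namespace exposes a 4096-byte block size, as the "invalid ns size ... for I/O size 512" warnings show. A condensed sketch of the sweep itself (a simplified reconstruction, reusing the binary path and target address from this run):

    # Sketch: the queue-depth x IO-size sweep behind the perf runs above (simplified).
    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    addr='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
    for qd in 1 32 128; do
        for io in 512 131072; do
            "$perf" -q "$qd" -o "$io" -w randrw -M 50 -t 10 -r "$addr"
        done
    done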
00:21:51.947 ======================================================== 00:21:51.947 Latency(us) 00:21:51.947 Device Information : IOPS MiB/s Average min max 00:21:51.947 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3629.94 453.74 35291.98 15094.35 79531.53 00:21:51.947 ======================================================== 00:21:51.947 Total : 3629.94 453.74 35291.98 15094.35 79531.53 00:21:51.947 00:21:51.947 09:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:52.205 09:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete dabf54b8-8bd5-4979-9c0d-2f7c7e786b1f 00:21:52.464 09:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:52.723 09:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 995086b1-9d5e-4c55-9cc5-c11ba51c2e68 00:21:52.981 09:23:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:53.240 09:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:53.240 09:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:53.240 09:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:53.240 09:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:21:53.240 09:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:53.240 09:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:21:53.240 09:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:53.240 09:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:53.240 rmmod nvme_tcp 00:21:53.240 rmmod nvme_fabrics 00:21:53.240 rmmod nvme_keyring 00:21:53.240 09:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:53.240 09:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:21:53.240 09:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:21:53.240 09:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 81612 ']' 00:21:53.240 09:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 81612 00:21:53.240 09:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 81612 ']' 00:21:53.240 09:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 81612 00:21:53.240 09:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:21:53.240 09:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:53.241 09:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81612 00:21:53.241 killing process with pid 81612 00:21:53.241 09:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:53.241 09:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:53.241 09:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81612' 00:21:53.241 09:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 81612 00:21:53.241 09:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 81612 00:21:55.773 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:55.773 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:55.773 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:55.773 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:21:55.773 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:21:55.773 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:55.773 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:21:55.773 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:21:55.774 ************************************ 00:21:55.774 END TEST nvmf_perf 00:21:55.774 ************************************ 00:21:55.774 00:21:55.774 real 0m53.491s 00:21:55.774 user 3m21.245s 00:21:55.774 sys 0m11.928s 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.774 ************************************ 00:21:55.774 START TEST nvmf_fio_host 00:21:55.774 ************************************ 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:55.774 * Looking for test storage... 00:21:55.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:21:55.774 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:56.033 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:56.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.034 --rc genhtml_branch_coverage=1 00:21:56.034 --rc genhtml_function_coverage=1 00:21:56.034 --rc genhtml_legend=1 00:21:56.034 --rc geninfo_all_blocks=1 00:21:56.034 --rc geninfo_unexecuted_blocks=1 00:21:56.034 00:21:56.034 ' 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:56.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.034 --rc genhtml_branch_coverage=1 00:21:56.034 --rc genhtml_function_coverage=1 00:21:56.034 --rc genhtml_legend=1 00:21:56.034 --rc geninfo_all_blocks=1 00:21:56.034 --rc geninfo_unexecuted_blocks=1 00:21:56.034 00:21:56.034 ' 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:56.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.034 --rc genhtml_branch_coverage=1 00:21:56.034 --rc genhtml_function_coverage=1 00:21:56.034 --rc genhtml_legend=1 00:21:56.034 --rc geninfo_all_blocks=1 00:21:56.034 --rc geninfo_unexecuted_blocks=1 00:21:56.034 00:21:56.034 ' 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:56.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.034 --rc genhtml_branch_coverage=1 00:21:56.034 --rc genhtml_function_coverage=1 00:21:56.034 --rc genhtml_legend=1 00:21:56.034 --rc geninfo_all_blocks=1 00:21:56.034 --rc geninfo_unexecuted_blocks=1 00:21:56.034 00:21:56.034 ' 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.034 09:23:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.034 09:23:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.034 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:56.035 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
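The "integer expression expected" message above comes from test/nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': the variable being checked is empty in this run, so bash's test builtin cannot do a numeric comparison, returns a non-zero status, and the branch is simply not taken. A minimal reproduction and the usual guard, as an illustrative sketch (the variable name is hypothetical):

    # Sketch: an empty string compared with -eq reproduces the warning seen above.
    unset maybe_flag
    if [ "$maybe_flag" -eq 1 ]; then echo taken; fi        # prints "[: : integer expression expected"
    if [ "${maybe_flag:-0}" -eq 1 ]; then echo taken; fi   # defaulting to 0 keeps the test numeric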
00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:56.035 Cannot find device "nvmf_init_br" 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:56.035 Cannot find device "nvmf_init_br2" 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:56.035 Cannot find device "nvmf_tgt_br" 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:21:56.035 Cannot find device "nvmf_tgt_br2" 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:56.035 Cannot find device "nvmf_init_br" 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:56.035 Cannot find device "nvmf_init_br2" 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:56.035 Cannot find device "nvmf_tgt_br" 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:56.035 Cannot find device "nvmf_tgt_br2" 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:56.035 Cannot find device "nvmf_br" 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:56.035 Cannot find device "nvmf_init_if" 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:56.035 Cannot find device "nvmf_init_if2" 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:56.035 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:56.035 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:56.035 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:56.294 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:56.294 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:21:56.294 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:56.294 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:56.294 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:56.294 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:56.294 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:56.294 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:56.294 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:56.294 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:56.294 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:56.294 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:56.294 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:56.294 09:23:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:56.294 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:56.294 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:56.294 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:56.294 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:56.294 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:56.294 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:56.294 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:56.294 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:56.294 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:56.294 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:56.294 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:56.294 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:56.294 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:56.294 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:56.294 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:21:56.294 00:21:56.294 --- 10.0.0.3 ping statistics --- 00:21:56.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.294 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:21:56.294 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:56.294 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:56.295 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:21:56.295 00:21:56.295 --- 10.0.0.4 ping statistics --- 00:21:56.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.295 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:21:56.295 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:56.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:56.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:21:56.295 00:21:56.295 --- 10.0.0.1 ping statistics --- 00:21:56.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.295 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:21:56.295 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:56.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:56.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:21:56.295 00:21:56.295 --- 10.0.0.2 ping statistics --- 00:21:56.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.295 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:21:56.295 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.295 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:21:56.295 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:56.295 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.295 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:56.295 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:56.295 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.295 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:56.295 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:56.295 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:56.295 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:56.295 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:56.295 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.295 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=82509 00:21:56.295 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:56.295 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:56.295 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 82509 00:21:56.295 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 82509 ']' 00:21:56.295 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.295 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:56.295 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.295 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:56.295 09:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.553 [2024-12-13 09:23:50.253252] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:21:56.553 [2024-12-13 09:23:50.253441] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.553 [2024-12-13 09:23:50.435215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:56.812 [2024-12-13 09:23:50.520096] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.812 [2024-12-13 09:23:50.520151] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.812 [2024-12-13 09:23:50.520168] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.812 [2024-12-13 09:23:50.520178] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.812 [2024-12-13 09:23:50.520189] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
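With nvmf_tgt up inside the nvmf_tgt_ns_spdk namespace and listening on its RPC socket, fio.sh (lines 29-36 below) builds the export that the fio jobs will exercise: a TCP transport, a 64 MB malloc bdev with 512-byte blocks (bdev_malloc_create 64 512), and subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace and a listener on 10.0.0.3:4420. Condensed into a standalone sketch (a simplified reconstruction using the paths and addresses from this run, not the test script itself):

    # Sketch: target-side RPC sequence for the fio runs below (simplified reconstruction).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc1
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420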
00:21:56.812 [2024-12-13 09:23:50.522075] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.812 [2024-12-13 09:23:50.522223] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.812 [2024-12-13 09:23:50.522897] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:56.812 [2024-12-13 09:23:50.522915] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.812 [2024-12-13 09:23:50.686082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:57.380 09:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:57.380 09:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:21:57.380 09:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:57.639 [2024-12-13 09:23:51.511962] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.898 09:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:57.898 09:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:57.898 09:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.898 09:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:58.157 Malloc1 00:21:58.157 09:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:58.416 09:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:58.675 09:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:58.934 [2024-12-13 09:23:52.623378] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:58.934 09:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:21:59.193 09:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:59.193 09:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:59.193 09:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:59.193 09:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:59.193 09:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:59.193 09:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:59.193 09:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:59.193 09:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:59.193 09:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:59.193 09:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:59.193 09:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:59.193 09:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:59.193 09:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:59.193 09:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:59.193 09:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:59.193 09:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:21:59.193 09:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:59.193 09:23:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:59.452 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:59.453 fio-3.35 00:21:59.453 Starting 1 thread 00:22:01.988 00:22:01.988 test: (groupid=0, jobs=1): err= 0: pid=82579: Fri Dec 13 09:23:55 2024 00:22:01.988 read: IOPS=7676, BW=30.0MiB/s (31.4MB/s)(60.2MiB/2008msec) 00:22:01.988 slat (usec): min=2, max=200, avg= 3.02, stdev= 2.71 00:22:01.988 clat (usec): min=1922, max=15169, avg=8642.83, stdev=700.39 00:22:01.988 lat (usec): min=1963, max=15171, avg=8645.86, stdev=700.28 00:22:01.988 clat percentiles (usec): 00:22:01.988 | 1.00th=[ 7308], 5.00th=[ 7701], 10.00th=[ 7898], 20.00th=[ 8094], 00:22:01.988 | 30.00th=[ 8291], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8717], 00:22:01.988 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9503], 95.00th=[ 9765], 00:22:01.988 | 99.00th=[10552], 99.50th=[10814], 99.90th=[13304], 99.95th=[14615], 00:22:01.988 | 99.99th=[15008] 00:22:01.988 bw ( KiB/s): min=28750, max=32144, per=99.97%, avg=30697.50, stdev=1451.92, samples=4 00:22:01.988 iops : min= 7187, max= 8036, avg=7674.25, stdev=363.20, samples=4 00:22:01.988 write: IOPS=7664, BW=29.9MiB/s (31.4MB/s)(60.1MiB/2008msec); 0 zone resets 00:22:01.988 slat (usec): min=2, max=151, avg= 3.12, stdev= 2.10 00:22:01.988 clat (usec): min=1703, max=14780, avg=7923.92, stdev=657.59 00:22:01.988 lat (usec): min=1714, max=14783, avg=7927.05, stdev=657.60 00:22:01.988 clat percentiles (usec): 00:22:01.988 | 1.00th=[ 6718], 5.00th=[ 7111], 10.00th=[ 7242], 20.00th=[ 7439], 00:22:01.988 | 30.00th=[ 7635], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 7963], 00:22:01.988 | 70.00th=[ 8160], 80.00th=[ 8356], 90.00th=[ 8717], 95.00th=[ 8979], 00:22:01.988 | 99.00th=[ 9634], 99.50th=[10028], 99.90th=[12780], 99.95th=[13960], 00:22:01.988 | 99.99th=[14353] 00:22:01.988 bw ( KiB/s): min=29748, max=31456, per=99.90%, avg=30627.00, stdev=906.21, samples=4 00:22:01.988 iops : min= 7437, max= 7864, avg=7656.75, stdev=226.55, samples=4 
00:22:01.988 lat (msec) : 2=0.01%, 4=0.12%, 10=98.00%, 20=1.87% 00:22:01.988 cpu : usr=70.40%, sys=21.82%, ctx=3, majf=0, minf=1554 00:22:01.988 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:01.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:01.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:01.988 issued rwts: total=15414,15390,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:01.988 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:01.988 00:22:01.988 Run status group 0 (all jobs): 00:22:01.988 READ: bw=30.0MiB/s (31.4MB/s), 30.0MiB/s-30.0MiB/s (31.4MB/s-31.4MB/s), io=60.2MiB (63.1MB), run=2008-2008msec 00:22:01.988 WRITE: bw=29.9MiB/s (31.4MB/s), 29.9MiB/s-29.9MiB/s (31.4MB/s-31.4MB/s), io=60.1MiB (63.0MB), run=2008-2008msec 00:22:01.988 ----------------------------------------------------- 00:22:01.988 Suppressions used: 00:22:01.988 count bytes template 00:22:01.988 1 57 /usr/src/fio/parse.c 00:22:01.988 1 8 libtcmalloc_minimal.so 00:22:01.988 ----------------------------------------------------- 00:22:01.988 00:22:01.989 09:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:22:01.989 09:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:22:01.989 09:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:01.989 09:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:01.989 09:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:01.989 09:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:01.989 09:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:01.989 09:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:01.989 09:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:01.989 09:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:01.989 09:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:01.989 09:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:01.989 09:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:01.989 09:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:01.989 09:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:22:01.989 09:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:01.989 09:23:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:22:02.248 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:02.248 fio-3.35 00:22:02.248 Starting 1 thread 00:22:04.812 00:22:04.812 test: (groupid=0, jobs=1): err= 0: pid=82625: Fri Dec 13 09:23:58 2024 00:22:04.812 read: IOPS=7077, BW=111MiB/s (116MB/s)(222MiB/2008msec) 00:22:04.812 slat (usec): min=3, max=204, avg= 4.54, stdev= 3.07 00:22:04.812 clat (usec): min=2342, max=27358, avg=10065.89, stdev=3243.54 00:22:04.812 lat (usec): min=2346, max=27362, avg=10070.43, stdev=3243.74 00:22:04.812 clat percentiles (usec): 00:22:04.812 | 1.00th=[ 4621], 5.00th=[ 5669], 10.00th=[ 6194], 20.00th=[ 7177], 00:22:04.812 | 30.00th=[ 8094], 40.00th=[ 8848], 50.00th=[ 9634], 60.00th=[10552], 00:22:04.812 | 70.00th=[11338], 80.00th=[12649], 90.00th=[14484], 95.00th=[16450], 00:22:04.812 | 99.00th=[18744], 99.50th=[19792], 99.90th=[25560], 99.95th=[26346], 00:22:04.812 | 99.99th=[27395] 00:22:04.812 bw ( KiB/s): min=52000, max=63328, per=50.68%, avg=57392.00, stdev=4637.94, samples=4 00:22:04.812 iops : min= 3250, max= 3958, avg=3587.00, stdev=289.87, samples=4 00:22:04.812 write: IOPS=4032, BW=63.0MiB/s (66.1MB/s)(118MiB/1865msec); 0 zone resets 00:22:04.812 slat (usec): min=32, max=227, avg=40.30, stdev=10.53 00:22:04.812 clat (usec): min=7793, max=31455, avg=14221.84, stdev=2956.81 00:22:04.812 lat (usec): min=7826, max=31488, avg=14262.14, stdev=2959.60 00:22:04.812 clat percentiles (usec): 00:22:04.812 | 1.00th=[ 9110], 5.00th=[10290], 10.00th=[10814], 20.00th=[11600], 00:22:04.812 | 30.00th=[12387], 40.00th=[13173], 50.00th=[13829], 60.00th=[14615], 00:22:04.812 | 70.00th=[15401], 80.00th=[16581], 90.00th=[17957], 95.00th=[19530], 00:22:04.812 | 99.00th=[21890], 99.50th=[23725], 99.90th=[30540], 99.95th=[31327], 00:22:04.812 | 99.99th=[31327] 00:22:04.812 bw ( KiB/s): min=55264, max=65568, per=92.36%, avg=59584.00, stdev=4316.17, samples=4 00:22:04.812 iops : min= 3454, max= 4098, avg=3724.00, stdev=269.76, samples=4 00:22:04.812 lat (msec) : 4=0.20%, 10=36.13%, 20=62.12%, 50=1.55% 00:22:04.812 cpu : usr=83.02%, sys=12.45%, ctx=91, majf=0, minf=2195 00:22:04.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:22:04.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:04.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:04.812 issued rwts: total=14212,7520,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:04.812 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:04.812 00:22:04.813 Run status group 0 (all jobs): 00:22:04.813 READ: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=222MiB (233MB), run=2008-2008msec 00:22:04.813 WRITE: bw=63.0MiB/s (66.1MB/s), 63.0MiB/s-63.0MiB/s (66.1MB/s-66.1MB/s), io=118MiB (123MB), run=1865-1865msec 00:22:04.813 ----------------------------------------------------- 00:22:04.813 Suppressions used: 00:22:04.813 count bytes template 00:22:04.813 1 57 /usr/src/fio/parse.c 00:22:04.813 528 50688 /usr/src/fio/iolog.c 00:22:04.813 1 8 libtcmalloc_minimal.so 00:22:04.813 ----------------------------------------------------- 00:22:04.813 00:22:04.813 09:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:05.072 09:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:22:05.072 09:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:22:05.072 09:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:22:05.072 09:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:22:05.072 09:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:22:05.072 09:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:05.072 09:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:05.072 09:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:22:05.072 09:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:22:05.072 09:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:22:05.072 09:23:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:22:05.330 Nvme0n1 00:22:05.330 09:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:22:05.588 09:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=ad945861-68e2-4140-83c1-c626c49933e2 00:22:05.588 09:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb ad945861-68e2-4140-83c1-c626c49933e2 00:22:05.588 09:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=ad945861-68e2-4140-83c1-c626c49933e2 00:22:05.588 09:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:22:05.588 09:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:22:05.588 09:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:22:05.588 09:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:05.847 09:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:22:05.847 { 00:22:05.847 "uuid": "ad945861-68e2-4140-83c1-c626c49933e2", 00:22:05.847 "name": "lvs_0", 00:22:05.847 "base_bdev": "Nvme0n1", 00:22:05.847 "total_data_clusters": 4, 00:22:05.847 "free_clusters": 4, 00:22:05.847 "block_size": 4096, 00:22:05.847 "cluster_size": 1073741824 00:22:05.847 } 00:22:05.847 ]' 00:22:05.847 09:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="ad945861-68e2-4140-83c1-c626c49933e2") .free_clusters' 00:22:05.847 09:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=4 00:22:05.847 09:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="ad945861-68e2-4140-83c1-c626c49933e2") .cluster_size' 00:22:06.106 4096 00:22:06.106 09:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:22:06.106 09:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4096 00:22:06.106 09:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1378 -- # echo 4096 00:22:06.106 09:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:22:06.106 6b37a0d2-65f1-4184-8414-71ababb465cd 00:22:06.106 09:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:22:06.674 09:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:22:06.674 09:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:22:06.932 09:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:22:06.932 09:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:22:06.932 09:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:06.933 09:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:06.933 09:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:06.933 09:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:06.933 09:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:06.933 09:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:06.933 09:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:06.933 09:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:06.933 09:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:06.933 09:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:06.933 09:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:06.933 09:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:06.933 09:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:22:06.933 09:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:06.933 09:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:22:07.192 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:07.192 fio-3.35 00:22:07.192 Starting 1 thread 
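The 4096 passed to bdev_lvol_create above comes from get_lvs_free_mb: free clusters times cluster size, converted to MiB (4 clusters x 1 GiB = 4096 MiB). A simplified sketch of that calculation, reusing the jq filters shown in the log (the UUID is the lvs_0 store created above):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    uuid=ad945861-68e2-4140-83c1-c626c49933e2
    # Pull free_clusters and cluster_size for this store, as the log's jq filters do.
    fc=$($RPC bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .free_clusters")
    cs=$($RPC bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .cluster_size")
    free_mb=$(( fc * cs / 1024 / 1024 ))    # 4 * 1073741824 / 2^20 = 4096
    $RPC bdev_lvol_create -l lvs_0 lbd_0 "$free_mb"

The same arithmetic later yields 4088 MiB for the nested lvs_n_0 store (1022 free clusters x 4 MiB clusters).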
00:22:09.725 00:22:09.725 test: (groupid=0, jobs=1): err= 0: pid=82733: Fri Dec 13 09:24:03 2024 00:22:09.725 read: IOPS=5181, BW=20.2MiB/s (21.2MB/s)(40.7MiB/2010msec) 00:22:09.725 slat (usec): min=2, max=346, avg= 3.91, stdev= 4.96 00:22:09.725 clat (usec): min=3698, max=24521, avg=12889.21, stdev=1113.10 00:22:09.725 lat (usec): min=3709, max=24525, avg=12893.12, stdev=1112.66 00:22:09.725 clat percentiles (usec): 00:22:09.725 | 1.00th=[10683], 5.00th=[11338], 10.00th=[11600], 20.00th=[11994], 00:22:09.725 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12911], 60.00th=[13173], 00:22:09.725 | 70.00th=[13435], 80.00th=[13698], 90.00th=[14222], 95.00th=[14615], 00:22:09.725 | 99.00th=[15401], 99.50th=[16188], 99.90th=[20579], 99.95th=[22152], 00:22:09.725 | 99.99th=[22152] 00:22:09.725 bw ( KiB/s): min=20144, max=20912, per=99.86%, avg=20698.00, stdev=371.08, samples=4 00:22:09.725 iops : min= 5036, max= 5228, avg=5174.50, stdev=92.77, samples=4 00:22:09.725 write: IOPS=5181, BW=20.2MiB/s (21.2MB/s)(40.7MiB/2010msec); 0 zone resets 00:22:09.725 slat (usec): min=2, max=268, avg= 4.00, stdev= 3.85 00:22:09.725 clat (usec): min=3161, max=22370, avg=11735.10, stdev=1060.26 00:22:09.725 lat (usec): min=3179, max=22374, avg=11739.10, stdev=1059.98 00:22:09.725 clat percentiles (usec): 00:22:09.725 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[10552], 20.00th=[10945], 00:22:09.725 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:22:09.725 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12911], 95.00th=[13304], 00:22:09.725 | 99.00th=[14222], 99.50th=[14746], 99.90th=[19268], 99.95th=[20841], 00:22:09.725 | 99.99th=[22414] 00:22:09.725 bw ( KiB/s): min=20448, max=21032, per=99.91%, avg=20706.00, stdev=288.03, samples=4 00:22:09.725 iops : min= 5112, max= 5258, avg=5176.50, stdev=72.01, samples=4 00:22:09.725 lat (msec) : 4=0.02%, 10=1.40%, 20=98.46%, 50=0.12% 00:22:09.725 cpu : usr=74.37%, sys=19.86%, ctx=6, majf=0, minf=1553 00:22:09.725 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:22:09.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:09.725 issued rwts: total=10415,10414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.725 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:09.725 00:22:09.725 Run status group 0 (all jobs): 00:22:09.725 READ: bw=20.2MiB/s (21.2MB/s), 20.2MiB/s-20.2MiB/s (21.2MB/s-21.2MB/s), io=40.7MiB (42.7MB), run=2010-2010msec 00:22:09.725 WRITE: bw=20.2MiB/s (21.2MB/s), 20.2MiB/s-20.2MiB/s (21.2MB/s-21.2MB/s), io=40.7MiB (42.7MB), run=2010-2010msec 00:22:09.725 ----------------------------------------------------- 00:22:09.725 Suppressions used: 00:22:09.725 count bytes template 00:22:09.725 1 58 /usr/src/fio/parse.c 00:22:09.725 1 8 libtcmalloc_minimal.so 00:22:09.725 ----------------------------------------------------- 00:22:09.725 00:22:09.984 09:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:09.984 09:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:22:10.242 09:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=2c0f66d5-8278-49ed-bc2b-e38c63e26c2c 00:22:10.242 09:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # 
get_lvs_free_mb 2c0f66d5-8278-49ed-bc2b-e38c63e26c2c 00:22:10.242 09:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=2c0f66d5-8278-49ed-bc2b-e38c63e26c2c 00:22:10.242 09:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:22:10.242 09:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:22:10.242 09:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:22:10.242 09:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:10.501 09:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:22:10.501 { 00:22:10.501 "uuid": "ad945861-68e2-4140-83c1-c626c49933e2", 00:22:10.501 "name": "lvs_0", 00:22:10.501 "base_bdev": "Nvme0n1", 00:22:10.501 "total_data_clusters": 4, 00:22:10.501 "free_clusters": 0, 00:22:10.501 "block_size": 4096, 00:22:10.501 "cluster_size": 1073741824 00:22:10.501 }, 00:22:10.501 { 00:22:10.501 "uuid": "2c0f66d5-8278-49ed-bc2b-e38c63e26c2c", 00:22:10.501 "name": "lvs_n_0", 00:22:10.501 "base_bdev": "6b37a0d2-65f1-4184-8414-71ababb465cd", 00:22:10.501 "total_data_clusters": 1022, 00:22:10.501 "free_clusters": 1022, 00:22:10.501 "block_size": 4096, 00:22:10.501 "cluster_size": 4194304 00:22:10.501 } 00:22:10.501 ]' 00:22:10.760 09:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="2c0f66d5-8278-49ed-bc2b-e38c63e26c2c") .free_clusters' 00:22:10.760 09:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1022 00:22:10.760 09:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="2c0f66d5-8278-49ed-bc2b-e38c63e26c2c") .cluster_size' 00:22:10.760 4088 00:22:10.760 09:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:22:10.760 09:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4088 00:22:10.760 09:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4088 00:22:10.760 09:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:22:11.020 b5c30696-6028-4822-a40c-898d205cd2a4 00:22:11.020 09:24:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:22:11.279 09:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:22:11.537 09:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:22:11.796 09:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:22:11.796 09:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 
00:22:11.796 09:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:11.796 09:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:11.796 09:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:11.796 09:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:11.796 09:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:11.796 09:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:11.796 09:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:11.796 09:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:11.796 09:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:11.796 09:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:11.796 09:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:11.796 09:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:11.796 09:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:22:11.796 09:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:11.796 09:24:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:22:12.054 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:12.054 fio-3.35 00:22:12.054 Starting 1 thread 00:22:14.588 00:22:14.588 test: (groupid=0, jobs=1): err= 0: pid=82809: Fri Dec 13 09:24:08 2024 00:22:14.588 read: IOPS=4644, BW=18.1MiB/s (19.0MB/s)(36.5MiB/2011msec) 00:22:14.588 slat (usec): min=2, max=168, avg= 3.50, stdev= 3.27 00:22:14.588 clat (usec): min=3558, max=25121, avg=14366.92, stdev=1221.84 00:22:14.588 lat (usec): min=3563, max=25124, avg=14370.42, stdev=1221.67 00:22:14.588 clat percentiles (usec): 00:22:14.588 | 1.00th=[11863], 5.00th=[12649], 10.00th=[13042], 20.00th=[13435], 00:22:14.588 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[14615], 00:22:14.588 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15795], 95.00th=[16188], 00:22:14.588 | 99.00th=[16909], 99.50th=[17695], 99.90th=[22938], 99.95th=[23462], 00:22:14.588 | 99.99th=[25035] 00:22:14.588 bw ( KiB/s): min=17724, max=18920, per=99.75%, avg=18531.00, stdev=550.74, samples=4 00:22:14.588 iops : min= 4431, max= 4730, avg=4632.75, stdev=137.68, samples=4 00:22:14.588 write: IOPS=4643, BW=18.1MiB/s (19.0MB/s)(36.5MiB/2011msec); 0 zone resets 00:22:14.588 slat (usec): min=2, max=135, avg= 3.56, stdev= 2.93 00:22:14.588 clat (usec): min=2186, max=24790, avg=13025.03, stdev=1166.64 00:22:14.588 lat (usec): min=2194, max=24793, avg=13028.59, stdev=1166.51 00:22:14.588 clat percentiles (usec): 00:22:14.588 | 1.00th=[10683], 5.00th=[11338], 10.00th=[11731], 
20.00th=[12125], 00:22:14.588 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304], 00:22:14.588 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14353], 95.00th=[14746], 00:22:14.588 | 99.00th=[15401], 99.50th=[15795], 99.90th=[22938], 99.95th=[23462], 00:22:14.588 | 99.99th=[24773] 00:22:14.588 bw ( KiB/s): min=18368, max=18658, per=99.88%, avg=18552.50, stdev=129.54, samples=4 00:22:14.588 iops : min= 4592, max= 4664, avg=4638.00, stdev=32.25, samples=4 00:22:14.588 lat (msec) : 4=0.05%, 10=0.35%, 20=99.38%, 50=0.22% 00:22:14.588 cpu : usr=75.12%, sys=19.90%, ctx=5, majf=0, minf=1553 00:22:14.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:22:14.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:14.588 issued rwts: total=9340,9338,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.588 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:14.588 00:22:14.588 Run status group 0 (all jobs): 00:22:14.588 READ: bw=18.1MiB/s (19.0MB/s), 18.1MiB/s-18.1MiB/s (19.0MB/s-19.0MB/s), io=36.5MiB (38.3MB), run=2011-2011msec 00:22:14.588 WRITE: bw=18.1MiB/s (19.0MB/s), 18.1MiB/s-18.1MiB/s (19.0MB/s-19.0MB/s), io=36.5MiB (38.2MB), run=2011-2011msec 00:22:14.588 ----------------------------------------------------- 00:22:14.588 Suppressions used: 00:22:14.588 count bytes template 00:22:14.588 1 58 /usr/src/fio/parse.c 00:22:14.588 1 8 libtcmalloc_minimal.so 00:22:14.588 ----------------------------------------------------- 00:22:14.588 00:22:14.588 09:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:14.847 09:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:22:14.847 09:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:22:15.105 09:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:22:15.673 09:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:22:15.673 09:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:22:15.932 09:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:22:16.869 09:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:16.869 09:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:16.870 09:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:16.870 09:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:16.870 09:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:16.870 09:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:16.870 09:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:16.870 09:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:16.870 09:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:16.870 rmmod 
nvme_tcp 00:22:16.870 rmmod nvme_fabrics 00:22:16.870 rmmod nvme_keyring 00:22:16.870 09:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:16.870 09:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:16.870 09:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:16.870 09:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 82509 ']' 00:22:16.870 09:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 82509 00:22:16.870 09:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 82509 ']' 00:22:16.870 09:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 82509 00:22:16.870 09:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:22:16.870 09:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:16.870 09:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82509 00:22:16.870 killing process with pid 82509 00:22:16.870 09:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:16.870 09:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:16.870 09:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82509' 00:22:16.870 09:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 82509 00:22:16.870 09:24:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 82509 00:22:17.807 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:17.807 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:17.807 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:17.807 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:17.807 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:17.807 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:17.807 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:17.807 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:17.807 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:17.807 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:17.807 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:17.807 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:17.807 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:17.807 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:18.066 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:18.066 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:18.066 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set 
nvmf_tgt_br2 down 00:22:18.066 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:18.066 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:18.066 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:18.066 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:18.066 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:18.066 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:18.066 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.066 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:18.066 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.066 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:22:18.066 00:22:18.066 real 0m22.333s 00:22:18.066 user 1m36.324s 00:22:18.066 sys 0m4.658s 00:22:18.066 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:18.066 09:24:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.066 ************************************ 00:22:18.066 END TEST nvmf_fio_host 00:22:18.066 ************************************ 00:22:18.066 09:24:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:18.066 09:24:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:18.066 09:24:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:18.066 09:24:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.066 ************************************ 00:22:18.066 START TEST nvmf_failover 00:22:18.066 ************************************ 00:22:18.066 09:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:18.327 * Looking for test storage... 
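Every fio invocation in the nvmf_fio_host test that just finished was launched through the fio_plugin helper traced above, which looks up the ASan runtime the SPDK ioengine links against and preloads it ahead of the plugin so the sanitizer interposes first. A simplified sketch of that logic (the function name below is illustrative, not the helper's real name; the real helper also checks libclang_rt.asan):

    run_fio_with_spdk_plugin() {    # illustrative wrapper, assumption for this sketch
        local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
        local asan_lib
        # Same ldd | grep | awk pipeline as in the trace above.
        asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
        # Preload the sanitizer runtime first (if present), then the ioengine.
        LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio "$@"
    }
    run_fio_with_spdk_plugin /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096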
00:22:18.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:18.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.327 --rc genhtml_branch_coverage=1 00:22:18.327 --rc genhtml_function_coverage=1 00:22:18.327 --rc genhtml_legend=1 00:22:18.327 --rc geninfo_all_blocks=1 00:22:18.327 --rc geninfo_unexecuted_blocks=1 00:22:18.327 00:22:18.327 ' 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:18.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.327 --rc genhtml_branch_coverage=1 00:22:18.327 --rc genhtml_function_coverage=1 00:22:18.327 --rc genhtml_legend=1 00:22:18.327 --rc geninfo_all_blocks=1 00:22:18.327 --rc geninfo_unexecuted_blocks=1 00:22:18.327 00:22:18.327 ' 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:18.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.327 --rc genhtml_branch_coverage=1 00:22:18.327 --rc genhtml_function_coverage=1 00:22:18.327 --rc genhtml_legend=1 00:22:18.327 --rc geninfo_all_blocks=1 00:22:18.327 --rc geninfo_unexecuted_blocks=1 00:22:18.327 00:22:18.327 ' 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:18.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.327 --rc genhtml_branch_coverage=1 00:22:18.327 --rc genhtml_function_coverage=1 00:22:18.327 --rc genhtml_legend=1 00:22:18.327 --rc geninfo_all_blocks=1 00:22:18.327 --rc geninfo_unexecuted_blocks=1 00:22:18.327 00:22:18.327 ' 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.327 
09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:18.327 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:18.327 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
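With NET_TYPE=virt, prepare_net_devs hands off to nvmf_veth_init, whose commands fill the next stretch of the log: a target network namespace, veth pairs bridged to the initiator side, and iptables accepts for port 4420. Condensed to the essential steps (the full trace below also sets up a second pair, nvmf_init_if2/nvmf_tgt_if2, on 10.0.0.2 and 10.0.0.4):

    # Condensed sketch of the nvmf_veth_init steps traced below.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator <-> bridge
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target <-> bridge
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # initiator-side reachability check against the target address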
00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:18.328 Cannot find device "nvmf_init_br" 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:18.328 Cannot find device "nvmf_init_br2" 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:22:18.328 Cannot find device "nvmf_tgt_br" 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:18.328 Cannot find device "nvmf_tgt_br2" 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:22:18.328 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:18.587 Cannot find device "nvmf_init_br" 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:18.587 Cannot find device "nvmf_init_br2" 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:18.587 Cannot find device "nvmf_tgt_br" 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:18.587 Cannot find device "nvmf_tgt_br2" 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:18.587 Cannot find device "nvmf_br" 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:18.587 Cannot find device "nvmf_init_if" 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:18.587 Cannot find device "nvmf_init_if2" 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:18.587 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:18.587 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:18.587 
09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:18.587 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:18.847 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:18.847 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:22:18.847 00:22:18.847 --- 10.0.0.3 ping statistics --- 00:22:18.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.847 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:18.847 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:18.847 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:22:18.847 00:22:18.847 --- 10.0.0.4 ping statistics --- 00:22:18.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.847 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:18.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:18.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:22:18.847 00:22:18.847 --- 10.0.0.1 ping statistics --- 00:22:18.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.847 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:18.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:18.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:22:18.847 00:22:18.847 --- 10.0.0.2 ping statistics --- 00:22:18.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.847 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=83111 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 83111 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:18.847 09:24:12 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 83111 ']' 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:18.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:18.847 09:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:18.847 [2024-12-13 09:24:12.685259] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:22:18.847 [2024-12-13 09:24:12.685447] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.106 [2024-12-13 09:24:12.875698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:19.365 [2024-12-13 09:24:13.003652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.365 [2024-12-13 09:24:13.003722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.365 [2024-12-13 09:24:13.003747] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.365 [2024-12-13 09:24:13.003764] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.365 [2024-12-13 09:24:13.003784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
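The commands above are nvmf/common.sh building the virtual test network before the target starts: veth pairs for the initiator side (nvmf_init_if/nvmf_init_if2, 10.0.0.1-2/24) and for the target side (nvmf_tgt_if/nvmf_tgt_if2, 10.0.0.3-4/24, moved into the nvmf_tgt_ns_spdk namespace), their peer ends enslaved to the nvmf_br bridge, iptables rules accepting NVMe/TCP on port 4420, and a ping in each direction to confirm reachability; nvmf_tgt is then launched inside the namespace with reactor mask 0xE. A minimal sketch of the same topology, reduced to one veth pair per side and using only commands that appear in the log (cleanup and the second pair omitted):

  # Hedged sketch: one initiator-side and one target-side veth pair, names and addresses as in the log.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br            # host-side initiator interface + bridge peer
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br             # target interface + bridge peer
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                              # both veth peers join the bridge
  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP toward the initiator side
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                                   # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> host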
00:22:19.365 [2024-12-13 09:24:13.005925] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.365 [2024-12-13 09:24:13.006040] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.365 [2024-12-13 09:24:13.006043] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:19.365 [2024-12-13 09:24:13.183321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:19.932 09:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:19.932 09:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:19.932 09:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:19.932 09:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:19.932 09:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:19.932 09:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.932 09:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:20.190 [2024-12-13 09:24:13.997566] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.190 09:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:20.449 Malloc0 00:22:20.449 09:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:20.708 09:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:20.966 09:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:21.225 [2024-12-13 09:24:15.006892] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:21.225 09:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:21.486 [2024-12-13 09:24:15.235110] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:21.486 09:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:22:21.747 [2024-12-13 09:24:15.475431] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:22:21.747 09:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:21.747 09:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=83169 00:22:21.747 09:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
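At this point the target side is fully configured over scripts/rpc.py: a TCP transport, a 64 MiB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 exposing Malloc0 as a namespace, and listeners on 10.0.0.3 ports 4420, 4421 and 4422; bdevperf is then launched as a separate application with its own RPC socket so controllers can be attached to it afterwards. A condensed sketch of that sequence, with paths shortened relative to the spdk checkout (the log uses the absolute /home/vagrant/spdk_repo/spdk paths):

  # Condensed sketch of the target-side setup traced above.
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  # (the script waits for the RPC socket /var/tmp/spdk.sock before issuing the calls below)
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                # 64 MiB ram disk, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                                       # three listeners = three failover targets
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
  done
  # bdevperf runs as a second process with its own RPC socket; -z starts it idle so the
  # controllers and the actual test run can be driven over /var/tmp/bdevperf.sock later.
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &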
00:22:21.747 09:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 83169 /var/tmp/bdevperf.sock 00:22:21.747 09:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 83169 ']' 00:22:21.747 09:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:21.747 09:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:21.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:21.747 09:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:21.747 09:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:21.747 09:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:22.682 09:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:22.682 09:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:22.682 09:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:22.941 NVMe0n1 00:22:22.941 09:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:23.509 00:22:23.509 09:24:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=83197 00:22:23.509 09:24:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:23.509 09:24:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:24.445 09:24:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:24.704 09:24:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:28.033 09:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:28.033 00:22:28.033 09:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:28.292 [2024-12-13 09:24:21.955038] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:22:28.292 09:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:31.582 09:24:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:31.582 [2024-12-13 09:24:25.236383] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:31.582 
09:24:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:32.518 09:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:22:32.777 09:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 83197 00:22:39.352 { 00:22:39.352 "results": [ 00:22:39.352 { 00:22:39.352 "job": "NVMe0n1", 00:22:39.352 "core_mask": "0x1", 00:22:39.352 "workload": "verify", 00:22:39.352 "status": "finished", 00:22:39.352 "verify_range": { 00:22:39.352 "start": 0, 00:22:39.352 "length": 16384 00:22:39.352 }, 00:22:39.352 "queue_depth": 128, 00:22:39.352 "io_size": 4096, 00:22:39.352 "runtime": 15.010258, 00:22:39.352 "iops": 8176.475047930555, 00:22:39.352 "mibps": 31.93935565597873, 00:22:39.352 "io_failed": 3173, 00:22:39.352 "io_timeout": 0, 00:22:39.352 "avg_latency_us": 15228.230936124492, 00:22:39.352 "min_latency_us": 610.6763636363636, 00:22:39.352 "max_latency_us": 21328.98909090909 00:22:39.352 } 00:22:39.352 ], 00:22:39.352 "core_count": 1 00:22:39.352 } 00:22:39.352 09:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 83169 00:22:39.352 09:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 83169 ']' 00:22:39.352 09:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 83169 00:22:39.352 09:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:39.352 09:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:39.352 09:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83169 00:22:39.352 killing process with pid 83169 00:22:39.352 09:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:39.352 09:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:39.352 09:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83169' 00:22:39.352 09:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 83169 00:22:39.352 09:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 83169 00:22:39.352 09:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:39.352 [2024-12-13 09:24:15.575319] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:22:39.352 [2024-12-13 09:24:15.575509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83169 ] 00:22:39.352 [2024-12-13 09:24:15.748698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.352 [2024-12-13 09:24:15.872697] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.352 [2024-12-13 09:24:16.046152] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:39.352 Running I/O for 15 seconds... 
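The bdevperf output that starts above ("Starting SPDK v25.01-pre ... spdk_pid83169") and continues below is try.txt, dumped by the cat at host/failover.sh@63 once the run finished. The failover exercise that produced it is the sequence traced just before it: NVMe0 is attached to the bdevperf application over ports 4420 and 4421 with -x failover, perform_tests is started in the background (pid 83197), and listeners are then removed and re-added so I/O is repeatedly forced onto another path; the JSON summary reports roughly 8176 IOPS for the 15-second verify run with 3173 failed I/Os counted across the path switches. A condensed sketch of that driving sequence (the attach/listen/unlisten helpers are illustrative shorthand for the full rpc.py invocations shown in the log):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  SOCK=/var/tmp/bdevperf.sock
  attach()   { "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s "$1" -f ipv4 -n "$NQN" -x failover; }
  listen()   { "$RPC" nvmf_subsystem_add_listener    "$NQN" -t tcp -a 10.0.0.3 -s "$1"; }
  unlisten() { "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s "$1"; }

  attach 4420; attach 4421            # register two paths with the failover policy
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests &
  sleep 1; unlisten 4420              # drop the active path -> "Start failover from 10.0.0.3:4420 to 10.0.0.3:4421"
  sleep 3; attach 4422; unlisten 4421 # add a third path, then drop the second
  sleep 3; listen 4420                # bring the original port back as a failover target
  sleep 1; unlisten 4422
  wait                                # perform_tests returns the JSON summary shown above

The ABORTED - SQ DELETION command dump that follows is the initiator printing the in-flight I/O it had to abort when the 4420 listener disappeared, before the reset/reconnect to 4421 completed.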
00:22:39.352 6435.00 IOPS, 25.14 MiB/s [2024-12-13T09:24:33.242Z] [2024-12-13 09:24:18.349436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.352 [2024-12-13 09:24:18.349532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.352 [2024-12-13 09:24:18.349573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:57744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.352 [2024-12-13 09:24:18.349601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.352 [2024-12-13 09:24:18.349624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:57752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.352 [2024-12-13 09:24:18.349645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.352 [2024-12-13 09:24:18.349667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:57760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.352 [2024-12-13 09:24:18.349688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.352 [2024-12-13 09:24:18.349709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:57768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.352 [2024-12-13 09:24:18.349730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.352 [2024-12-13 09:24:18.349750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:57776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.352 [2024-12-13 09:24:18.349771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.349792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:57784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.349812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.349833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:57792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.349854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.349875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:58640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.353 [2024-12-13 09:24:18.349896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.349916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:57800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.349941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.349962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:57808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.350003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.350026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:57816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.350048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.350068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.350089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.350109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:57832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.350130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.350149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:57840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.350176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.350197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:57848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.350219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.350239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:57856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.350260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.350280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:57864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.350318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.350342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:57872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.350363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.350383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:57880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.350404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 
09:24:18.350424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:57888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.350445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.350465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:57896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.350486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.350506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:57904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.350529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.350558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:57912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.350581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.350601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:57920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.350622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.350642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:57928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.350664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.350684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:57936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.350705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.350725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:57944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.350745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.350765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:57952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.350786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.350805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:57960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.350852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.350893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:57968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.350918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.350941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:57976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.350964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.351001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:57984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.351025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.351049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:57992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.351074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.351097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:58000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.351122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.351154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:58008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.351213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.351250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:58016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.351273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.351293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:58024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.351315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.351335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:58032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.351357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.351391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:58040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.351414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.351435] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:58048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.351457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.351477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:58056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.351501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.351521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:58064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.351558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.351579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:58072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.351600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.351621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:58080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.351643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.351663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:58088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.353 [2024-12-13 09:24:18.351684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.353 [2024-12-13 09:24:18.351705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.351730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.351751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.351773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.351793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:58112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.351823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.351845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:58120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.351879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.351900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:58128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.351922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.351943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:58136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.351964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.351984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:58144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.352006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.352026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:58152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.352047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.352068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:58160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.352089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.352110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:58168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.352131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.352151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:58176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.352172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.352192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:58184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.352215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.352236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:58192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.352259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.352307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.352333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.352355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:58208 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.352378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.352407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:58216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.352430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.352452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.352476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.352498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:58232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.352519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.352541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:58240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.352563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.352585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:58248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.352608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.352630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:58256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.352652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.352673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:58264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.352695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.352731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:58272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.352752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.352773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:58280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.352794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.352815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:58288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 
09:24:18.352836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.352857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:58296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.352880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.352901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:58304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.352922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.352943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:58312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.352973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.352996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:58320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.353018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.353039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:58328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.353060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.353081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.353103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.353124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:58344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.353145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.353166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:58352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.353190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.353210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:58360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.353232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.353252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:58368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.353274] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.353294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:58376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.353332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.353355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.353377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.353397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:58392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.353421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.353442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:58400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.353464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.353484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:58408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.353505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.354 [2024-12-13 09:24:18.353534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:58416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.354 [2024-12-13 09:24:18.353556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.353577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:58424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.355 [2024-12-13 09:24:18.353599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.353620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:58432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.355 [2024-12-13 09:24:18.353642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.353664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:58440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.355 [2024-12-13 09:24:18.353688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.353709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:58448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.355 [2024-12-13 09:24:18.353730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.353751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:58456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.355 [2024-12-13 09:24:18.353772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.353793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:58464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.355 [2024-12-13 09:24:18.353815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.353835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.355 [2024-12-13 09:24:18.353857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.353878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:58480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.355 [2024-12-13 09:24:18.353901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.353922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:58488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.355 [2024-12-13 09:24:18.353943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.353964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:58496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.355 [2024-12-13 09:24:18.353987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.354009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:58504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.355 [2024-12-13 09:24:18.354032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.354053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.355 [2024-12-13 09:24:18.354081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.354103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:58520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.355 [2024-12-13 09:24:18.354125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.354146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:58528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.355 [2024-12-13 09:24:18.354167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.354187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:58536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.355 [2024-12-13 09:24:18.354208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.354229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:58544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.355 [2024-12-13 09:24:18.354250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.354271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:58552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.355 [2024-12-13 09:24:18.354306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.354329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:58560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.355 [2024-12-13 09:24:18.354351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.354373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:58568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.355 [2024-12-13 09:24:18.354396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.354418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:58576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.355 [2024-12-13 09:24:18.354457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.354480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:58584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.355 [2024-12-13 09:24:18.354504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.354525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:58592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.355 [2024-12-13 09:24:18.354547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.354567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:58600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.355 [2024-12-13 09:24:18.354588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.354610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:58608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.355 [2024-12-13 09:24:18.354633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 
[2024-12-13 09:24:18.354654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:58616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.355 [2024-12-13 09:24:18.354683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.354705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:58624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.355 [2024-12-13 09:24:18.354727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.354748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:58648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.355 [2024-12-13 09:24:18.354772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.354793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:58656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.355 [2024-12-13 09:24:18.354814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.354878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:58664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.355 [2024-12-13 09:24:18.354904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.354956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:58672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.355 [2024-12-13 09:24:18.354982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.355006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.355 [2024-12-13 09:24:18.355030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.355053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.355 [2024-12-13 09:24:18.355077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.355108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:58696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.355 [2024-12-13 09:24:18.355132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.355155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.355 [2024-12-13 09:24:18.355181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.355205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.355 [2024-12-13 09:24:18.355231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.355254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.355 [2024-12-13 09:24:18.355294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.355316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:58728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.355 [2024-12-13 09:24:18.355349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.355384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:58736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.355 [2024-12-13 09:24:18.355422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.355443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.355 [2024-12-13 09:24:18.355464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.355485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.355 [2024-12-13 09:24:18.355509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.355 [2024-12-13 09:24:18.355529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(6) to be set 00:22:39.355 [2024-12-13 09:24:18.355555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.355 [2024-12-13 09:24:18.355572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.356 [2024-12-13 09:24:18.355590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58760 len:8 PRP1 0x0 PRP2 0x0 00:22:39.356 [2024-12-13 09:24:18.355609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:18.355861] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:22:39.356 [2024-12-13 09:24:18.355939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.356 [2024-12-13 09:24:18.355968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:18.355988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.356 [2024-12-13 09:24:18.356006] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:18.356024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.356 [2024-12-13 09:24:18.356041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:18.356059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.356 [2024-12-13 09:24:18.356076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:18.356098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:39.356 [2024-12-13 09:24:18.356186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:22:39.356 [2024-12-13 09:24:18.360018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:39.356 [2024-12-13 09:24:18.389040] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:22:39.356 7101.50 IOPS, 27.74 MiB/s [2024-12-13T09:24:33.246Z] 7507.67 IOPS, 29.33 MiB/s [2024-12-13T09:24:33.246Z] 7720.25 IOPS, 30.16 MiB/s [2024-12-13T09:24:33.246Z] [2024-12-13 09:24:21.955673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.356 [2024-12-13 09:24:21.955759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.955825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:42720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.356 [2024-12-13 09:24:21.955847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.955869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.356 [2024-12-13 09:24:21.955888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.955923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.356 [2024-12-13 09:24:21.955940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.955958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.356 [2024-12-13 09:24:21.955976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.955994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.356 [2024-12-13 
09:24:21.956011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.956030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.356 [2024-12-13 09:24:21.956047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.956066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.356 [2024-12-13 09:24:21.956083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.956101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.356 [2024-12-13 09:24:21.956118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.956137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:43232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.356 [2024-12-13 09:24:21.956154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.956174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.356 [2024-12-13 09:24:21.956191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.956209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.356 [2024-12-13 09:24:21.956226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.956244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:43256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.356 [2024-12-13 09:24:21.956261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.956279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.356 [2024-12-13 09:24:21.956306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.956327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.356 [2024-12-13 09:24:21.956344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.956380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.356 [2024-12-13 09:24:21.956398] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.956417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.356 [2024-12-13 09:24:21.956436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.956455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.356 [2024-12-13 09:24:21.956473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.956492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:42728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.356 [2024-12-13 09:24:21.956509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.956528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:42736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.356 [2024-12-13 09:24:21.956545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.956564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.356 [2024-12-13 09:24:21.956581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.956600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:42752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.356 [2024-12-13 09:24:21.956617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.956636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.356 [2024-12-13 09:24:21.956653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.956672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:42768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.356 [2024-12-13 09:24:21.956689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.956708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.356 [2024-12-13 09:24:21.956725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.956743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.356 [2024-12-13 09:24:21.956761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.956788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.356 [2024-12-13 09:24:21.956807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.356 [2024-12-13 09:24:21.956826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.357 [2024-12-13 09:24:21.956843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.956862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.357 [2024-12-13 09:24:21.956880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.956898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.357 [2024-12-13 09:24:21.956915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.956934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.357 [2024-12-13 09:24:21.956951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.956988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.357 [2024-12-13 09:24:21.957006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.957026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.357 [2024-12-13 09:24:21.957044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.957063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:43360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.357 [2024-12-13 09:24:21.957080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.957099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.357 [2024-12-13 09:24:21.957116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.957135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.357 [2024-12-13 09:24:21.957152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.957171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:42808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.357 [2024-12-13 09:24:21.957188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.957207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.357 [2024-12-13 09:24:21.957225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.957243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:42824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.357 [2024-12-13 09:24:21.957270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.957306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.357 [2024-12-13 09:24:21.957326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.957344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.357 [2024-12-13 09:24:21.957362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.957382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:42848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.357 [2024-12-13 09:24:21.957399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.957418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:42856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.357 [2024-12-13 09:24:21.957436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.957454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.357 [2024-12-13 09:24:21.957472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.957491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.357 [2024-12-13 09:24:21.957509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.957528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:42880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.357 [2024-12-13 09:24:21.957545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 
[2024-12-13 09:24:21.957564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:42888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.357 [2024-12-13 09:24:21.957582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.957601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:42896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.357 [2024-12-13 09:24:21.957618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.957637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:42904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.357 [2024-12-13 09:24:21.957655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.957674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.357 [2024-12-13 09:24:21.957692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.957711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.357 [2024-12-13 09:24:21.957728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.957747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.357 [2024-12-13 09:24:21.957772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.957793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.357 [2024-12-13 09:24:21.957811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.957830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:42944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.357 [2024-12-13 09:24:21.957847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.957866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:42952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.357 [2024-12-13 09:24:21.957884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.957903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:42960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.357 [2024-12-13 09:24:21.957920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.957939] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:42968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.357 [2024-12-13 09:24:21.957956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.957975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.357 [2024-12-13 09:24:21.957993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.958012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.357 [2024-12-13 09:24:21.958029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.958048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.357 [2024-12-13 09:24:21.958066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.958085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:43384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.357 [2024-12-13 09:24:21.958102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.958121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:43392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.357 [2024-12-13 09:24:21.958138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.958157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:43400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.357 [2024-12-13 09:24:21.958176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.958195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.357 [2024-12-13 09:24:21.958212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.958239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.357 [2024-12-13 09:24:21.958257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.958303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:43424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.357 [2024-12-13 09:24:21.958324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.958344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:42984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.357 [2024-12-13 09:24:21.958362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.357 [2024-12-13 09:24:21.958382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:42992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.358 [2024-12-13 09:24:21.958400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.958420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:43000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.358 [2024-12-13 09:24:21.958438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.958457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.358 [2024-12-13 09:24:21.958476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.958495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:43016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.358 [2024-12-13 09:24:21.958513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.958533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:43024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.358 [2024-12-13 09:24:21.958551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.958588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:43032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.358 [2024-12-13 09:24:21.958606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.958626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:43040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.358 [2024-12-13 09:24:21.958645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.958665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:43048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.358 [2024-12-13 09:24:21.958684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.958704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:43056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.358 [2024-12-13 09:24:21.958723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.958743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:43064 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.358 [2024-12-13 09:24:21.958769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.958790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:43072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.358 [2024-12-13 09:24:21.958809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.958856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:43080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.358 [2024-12-13 09:24:21.958878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.958899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.358 [2024-12-13 09:24:21.958918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.958940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.358 [2024-12-13 09:24:21.958960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.958980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:43104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.358 [2024-12-13 09:24:21.959000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.959021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:43432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.358 [2024-12-13 09:24:21.959040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.959061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:43440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.358 [2024-12-13 09:24:21.959080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.959100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.358 [2024-12-13 09:24:21.959120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.959140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:43456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.358 [2024-12-13 09:24:21.959159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.959180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.358 
[2024-12-13 09:24:21.959199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.959234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:43472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.358 [2024-12-13 09:24:21.959252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.959272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.358 [2024-12-13 09:24:21.959290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.959333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.358 [2024-12-13 09:24:21.959355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.959375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.358 [2024-12-13 09:24:21.959394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.959414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:43504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.358 [2024-12-13 09:24:21.959433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.959452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.358 [2024-12-13 09:24:21.959471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.959491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:43520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.358 [2024-12-13 09:24:21.959509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.959530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:43528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.358 [2024-12-13 09:24:21.959549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.959585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:43536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.358 [2024-12-13 09:24:21.959605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.959625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:43544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.358 [2024-12-13 09:24:21.959644] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.959663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.358 [2024-12-13 09:24:21.959682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.959702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:43560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.358 [2024-12-13 09:24:21.959721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.959741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.358 [2024-12-13 09:24:21.959759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.959779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:43576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.358 [2024-12-13 09:24:21.959798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.959818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.358 [2024-12-13 09:24:21.959836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.959866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:43112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.358 [2024-12-13 09:24:21.959885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.959906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:43120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.358 [2024-12-13 09:24:21.959925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.959945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:43128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.358 [2024-12-13 09:24:21.959964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.959984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:43136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.358 [2024-12-13 09:24:21.960002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.960022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:43144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.358 [2024-12-13 09:24:21.960041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.358 [2024-12-13 09:24:21.960062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:43152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.358 [2024-12-13 09:24:21.960081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.359 [2024-12-13 09:24:21.960100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.359 [2024-12-13 09:24:21.960119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.359 [2024-12-13 09:24:21.960139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ba00 is same with the state(6) to be set 00:22:39.359 [2024-12-13 09:24:21.960162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.359 [2024-12-13 09:24:21.960179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.359 [2024-12-13 09:24:21.960195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43168 len:8 PRP1 0x0 PRP2 0x0 00:22:39.359 [2024-12-13 09:24:21.960214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.359 [2024-12-13 09:24:21.960233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.359 [2024-12-13 09:24:21.960247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.359 [2024-12-13 09:24:21.960262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43592 len:8 PRP1 0x0 PRP2 0x0 00:22:39.359 [2024-12-13 09:24:21.960292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.359 [2024-12-13 09:24:21.960314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.359 [2024-12-13 09:24:21.960328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.359 [2024-12-13 09:24:21.960343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43600 len:8 PRP1 0x0 PRP2 0x0 00:22:39.359 [2024-12-13 09:24:21.960360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.359 [2024-12-13 09:24:21.960387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.359 [2024-12-13 09:24:21.960401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.359 [2024-12-13 09:24:21.960416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43608 len:8 PRP1 0x0 PRP2 0x0 00:22:39.359 [2024-12-13 09:24:21.960433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.359 [2024-12-13 09:24:21.960451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.359 [2024-12-13 09:24:21.960465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.359 
[2024-12-13 09:24:21.960479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43616 len:8 PRP1 0x0 PRP2 0x0 00:22:39.359 [2024-12-13 09:24:21.960496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.359 [2024-12-13 09:24:21.960513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.359 [2024-12-13 09:24:21.960527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.359 [2024-12-13 09:24:21.960541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43624 len:8 PRP1 0x0 PRP2 0x0 00:22:39.359 [2024-12-13 09:24:21.960558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.359 [2024-12-13 09:24:21.960575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.359 [2024-12-13 09:24:21.960589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.359 [2024-12-13 09:24:21.960603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43632 len:8 PRP1 0x0 PRP2 0x0 00:22:39.359 [2024-12-13 09:24:21.960621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.359 [2024-12-13 09:24:21.960638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.359 [2024-12-13 09:24:21.960652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.359 [2024-12-13 09:24:21.960666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43640 len:8 PRP1 0x0 PRP2 0x0 00:22:39.359 [2024-12-13 09:24:21.960684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.359 [2024-12-13 09:24:21.960701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.359 [2024-12-13 09:24:21.960715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.359 [2024-12-13 09:24:21.960730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43648 len:8 PRP1 0x0 PRP2 0x0 00:22:39.359 [2024-12-13 09:24:21.960747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.359 [2024-12-13 09:24:21.960764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.359 [2024-12-13 09:24:21.960778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.359 [2024-12-13 09:24:21.960792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43656 len:8 PRP1 0x0 PRP2 0x0 00:22:39.359 [2024-12-13 09:24:21.960809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.359 [2024-12-13 09:24:21.960826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.359 [2024-12-13 09:24:21.960840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.359 [2024-12-13 09:24:21.960855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43664 len:8 PRP1 0x0 PRP2 0x0 00:22:39.359 [2024-12-13 09:24:21.960879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.359 [2024-12-13 09:24:21.960897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.359 [2024-12-13 09:24:21.960912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.359 [2024-12-13 09:24:21.960926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43672 len:8 PRP1 0x0 PRP2 0x0 00:22:39.359 [2024-12-13 09:24:21.960943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.359 [2024-12-13 09:24:21.960960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.359 [2024-12-13 09:24:21.960974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.359 [2024-12-13 09:24:21.960988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43680 len:8 PRP1 0x0 PRP2 0x0 00:22:39.359 [2024-12-13 09:24:21.961005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.359 [2024-12-13 09:24:21.961023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.359 [2024-12-13 09:24:21.961037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.359 [2024-12-13 09:24:21.961052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43688 len:8 PRP1 0x0 PRP2 0x0 00:22:39.359 [2024-12-13 09:24:21.961068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.359 [2024-12-13 09:24:21.961085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.359 [2024-12-13 09:24:21.961099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.359 [2024-12-13 09:24:21.961114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43696 len:8 PRP1 0x0 PRP2 0x0 00:22:39.359 [2024-12-13 09:24:21.961131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.359 [2024-12-13 09:24:21.961148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.359 [2024-12-13 09:24:21.961161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.359 [2024-12-13 09:24:21.961176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43704 len:8 PRP1 0x0 PRP2 0x0 00:22:39.359 [2024-12-13 09:24:21.961193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.359 [2024-12-13 09:24:21.961210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.359 [2024-12-13 09:24:21.961228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.359 [2024-12-13 09:24:21.961243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:43712 len:8 PRP1 0x0 PRP2 0x0 00:22:39.359 [2024-12-13 09:24:21.961261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.359 [2024-12-13 09:24:21.961289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.359 [2024-12-13 09:24:21.961306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.359 [2024-12-13 09:24:21.961320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43720 len:8 PRP1 0x0 PRP2 0x0 00:22:39.359 [2024-12-13 09:24:21.961337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.359 [2024-12-13 09:24:21.961355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.359 [2024-12-13 09:24:21.961368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.359 [2024-12-13 09:24:21.961391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43728 len:8 PRP1 0x0 PRP2 0x0 00:22:39.359 [2024-12-13 09:24:21.961410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.359 [2024-12-13 09:24:21.961650] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:22:39.359 [2024-12-13 09:24:21.961721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.359 [2024-12-13 09:24:21.961750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.359 [2024-12-13 09:24:21.961772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.359 [2024-12-13 09:24:21.961790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.359 [2024-12-13 09:24:21.961808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.359 [2024-12-13 09:24:21.961825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.359 [2024-12-13 09:24:21.961844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.359 [2024-12-13 09:24:21.961861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.359 [2024-12-13 09:24:21.961879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:22:39.359 [2024-12-13 09:24:21.961930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:22:39.359 [2024-12-13 09:24:21.965650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:39.359 [2024-12-13 09:24:21.989254] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:22:39.359 7763.60 IOPS, 30.33 MiB/s [2024-12-13T09:24:33.249Z] 7883.00 IOPS, 30.79 MiB/s [2024-12-13T09:24:33.249Z] 7982.00 IOPS, 31.18 MiB/s [2024-12-13T09:24:33.249Z] 8036.25 IOPS, 31.39 MiB/s [2024-12-13T09:24:33.249Z] 8080.22 IOPS, 31.56 MiB/s [2024-12-13T09:24:33.250Z] [2024-12-13 09:24:26.511751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.511839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.511876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.511897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.511918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.511936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.511956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.511974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.511993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.512011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.512073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.512093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.512113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.512131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.512150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.512168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:39.360 [2024-12-13 09:24:26.512187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.512205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.512225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.512242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.512261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.512279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.512314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.512336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.512355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.512373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.512392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:94760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.512410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.512429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.512446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.512465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.512482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.512503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.360 [2024-12-13 09:24:26.512521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.512541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.360 [2024-12-13 09:24:26.512569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.512590] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.360 [2024-12-13 09:24:26.512608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.512628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.360 [2024-12-13 09:24:26.512646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.512665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.360 [2024-12-13 09:24:26.512683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.512702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.360 [2024-12-13 09:24:26.512720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.512739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.360 [2024-12-13 09:24:26.512757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.512776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:94328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.360 [2024-12-13 09:24:26.512793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.512812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.512830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.512849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.512867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.512886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.512903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.512923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.512940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.512959] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.512976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.512995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:94824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.513013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.513039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.513073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.513093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.513111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.513130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.513149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.513168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.513186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.513205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.513222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.513241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:94872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.513259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.513290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:94880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.513311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.513331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.513349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.513368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:78 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.513386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.513405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.360 [2024-12-13 09:24:26.513423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.360 [2024-12-13 09:24:26.513442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.360 [2024-12-13 09:24:26.513460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.513479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.361 [2024-12-13 09:24:26.513497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.513516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:94352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.361 [2024-12-13 09:24:26.513534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.513561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.361 [2024-12-13 09:24:26.513581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.513599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:94368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.361 [2024-12-13 09:24:26.513617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.513636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:94376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.361 [2024-12-13 09:24:26.513654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.513673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.361 [2024-12-13 09:24:26.513691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.513710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:94392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.361 [2024-12-13 09:24:26.513727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.513747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94912 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:39.361 [2024-12-13 09:24:26.513765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.513784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.361 [2024-12-13 09:24:26.513802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.513821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.361 [2024-12-13 09:24:26.513839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.513858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.361 [2024-12-13 09:24:26.513883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.513903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.361 [2024-12-13 09:24:26.513921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.513940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.361 [2024-12-13 09:24:26.513958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.513977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.361 [2024-12-13 09:24:26.513994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.514013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.361 [2024-12-13 09:24:26.514037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.514058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.361 [2024-12-13 09:24:26.514076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.514095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.361 [2024-12-13 09:24:26.514113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.514131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.361 
[2024-12-13 09:24:26.514149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.514168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.361 [2024-12-13 09:24:26.514185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.514204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.361 [2024-12-13 09:24:26.514221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.514240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.361 [2024-12-13 09:24:26.514257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.514287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.361 [2024-12-13 09:24:26.514308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.514328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.361 [2024-12-13 09:24:26.514346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.514365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.361 [2024-12-13 09:24:26.514384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.514404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:94408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.361 [2024-12-13 09:24:26.514422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.514441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.361 [2024-12-13 09:24:26.514458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.514477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:94424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.361 [2024-12-13 09:24:26.514495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.514522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.361 [2024-12-13 09:24:26.514541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.514574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.361 [2024-12-13 09:24:26.514593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.514611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.361 [2024-12-13 09:24:26.514629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.514648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:94456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.361 [2024-12-13 09:24:26.514666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.514685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.361 [2024-12-13 09:24:26.514702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.514721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.361 [2024-12-13 09:24:26.514739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.514757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.361 [2024-12-13 09:24:26.514775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.514794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:94488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.361 [2024-12-13 09:24:26.514812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.361 [2024-12-13 09:24:26.514892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.362 [2024-12-13 09:24:26.514913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.514935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.362 [2024-12-13 09:24:26.514955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.514976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.362 [2024-12-13 09:24:26.514996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.515018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.362 [2024-12-13 09:24:26.515038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.515060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.362 [2024-12-13 09:24:26.515093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.515122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.362 [2024-12-13 09:24:26.515142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.515174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.362 [2024-12-13 09:24:26.515194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.515214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.362 [2024-12-13 09:24:26.515262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.515281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.362 [2024-12-13 09:24:26.515311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.515331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.362 [2024-12-13 09:24:26.515350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.515384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.362 [2024-12-13 09:24:26.515405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.515425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.362 [2024-12-13 09:24:26.515445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.515464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.362 [2024-12-13 09:24:26.515483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.515503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.362 [2024-12-13 09:24:26.515521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.515541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.362 [2024-12-13 09:24:26.515560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.515580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.362 [2024-12-13 09:24:26.515598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.515618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.362 [2024-12-13 09:24:26.515637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.515657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.362 [2024-12-13 09:24:26.515699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.515720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.362 [2024-12-13 09:24:26.515738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.515762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.362 [2024-12-13 09:24:26.515782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.515801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.362 [2024-12-13 09:24:26.515819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.515839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.362 [2024-12-13 09:24:26.515857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.515877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.362 [2024-12-13 09:24:26.515895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 
[2024-12-13 09:24:26.515914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.362 [2024-12-13 09:24:26.515932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.515952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.362 [2024-12-13 09:24:26.515970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.515990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.362 [2024-12-13 09:24:26.516008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.516027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.362 [2024-12-13 09:24:26.516045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.516065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.362 [2024-12-13 09:24:26.516083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.516103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.362 [2024-12-13 09:24:26.516120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.516140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.362 [2024-12-13 09:24:26.516158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.516186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.362 [2024-12-13 09:24:26.516205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.516224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.362 [2024-12-13 09:24:26.516243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.516262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.362 [2024-12-13 09:24:26.516281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.516315] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.362 [2024-12-13 09:24:26.516351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.516372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.362 [2024-12-13 09:24:26.516390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.516411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:22:39.362 [2024-12-13 09:24:26.516436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.362 [2024-12-13 09:24:26.516452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.362 [2024-12-13 09:24:26.516468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94648 len:8 PRP1 0x0 PRP2 0x0 00:22:39.362 [2024-12-13 09:24:26.516486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.516505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.362 [2024-12-13 09:24:26.516519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.362 [2024-12-13 09:24:26.516534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95168 len:8 PRP1 0x0 PRP2 0x0 00:22:39.362 [2024-12-13 09:24:26.516551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.516568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.362 [2024-12-13 09:24:26.516582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.362 [2024-12-13 09:24:26.516596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95176 len:8 PRP1 0x0 PRP2 0x0 00:22:39.362 [2024-12-13 09:24:26.516613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.362 [2024-12-13 09:24:26.516630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.362 [2024-12-13 09:24:26.516644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.362 [2024-12-13 09:24:26.516659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95184 len:8 PRP1 0x0 PRP2 0x0 00:22:39.363 [2024-12-13 09:24:26.516676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.363 [2024-12-13 09:24:26.516693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.363 [2024-12-13 09:24:26.516730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.363 [2024-12-13 09:24:26.516746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95192 len:8 PRP1 0x0 
PRP2 0x0 00:22:39.363 [2024-12-13 09:24:26.516763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.363 [2024-12-13 09:24:26.516780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.363 [2024-12-13 09:24:26.516793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.363 [2024-12-13 09:24:26.516807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95200 len:8 PRP1 0x0 PRP2 0x0 00:22:39.363 [2024-12-13 09:24:26.516824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.363 [2024-12-13 09:24:26.516840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.363 [2024-12-13 09:24:26.516854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.363 [2024-12-13 09:24:26.516868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95208 len:8 PRP1 0x0 PRP2 0x0 00:22:39.363 [2024-12-13 09:24:26.516884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.363 [2024-12-13 09:24:26.516901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.363 [2024-12-13 09:24:26.516914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.363 [2024-12-13 09:24:26.516928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95216 len:8 PRP1 0x0 PRP2 0x0 00:22:39.363 [2024-12-13 09:24:26.516946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.363 [2024-12-13 09:24:26.516964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.363 [2024-12-13 09:24:26.516977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.363 [2024-12-13 09:24:26.516991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95224 len:8 PRP1 0x0 PRP2 0x0 00:22:39.363 [2024-12-13 09:24:26.517008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.363 [2024-12-13 09:24:26.517024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.363 [2024-12-13 09:24:26.517038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.363 [2024-12-13 09:24:26.517051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95232 len:8 PRP1 0x0 PRP2 0x0 00:22:39.363 [2024-12-13 09:24:26.517068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.363 [2024-12-13 09:24:26.517085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.363 [2024-12-13 09:24:26.517098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.363 [2024-12-13 09:24:26.517112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95240 len:8 PRP1 0x0 PRP2 0x0 00:22:39.363 [2024-12-13 09:24:26.517142] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.363 [2024-12-13 09:24:26.517160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.363 [2024-12-13 09:24:26.517174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.363 [2024-12-13 09:24:26.517188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95248 len:8 PRP1 0x0 PRP2 0x0 00:22:39.363 [2024-12-13 09:24:26.517204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.363 [2024-12-13 09:24:26.517230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.363 [2024-12-13 09:24:26.517245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.363 [2024-12-13 09:24:26.517260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95256 len:8 PRP1 0x0 PRP2 0x0 00:22:39.363 [2024-12-13 09:24:26.517278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.363 [2024-12-13 09:24:26.517294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.363 [2024-12-13 09:24:26.517326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.363 [2024-12-13 09:24:26.517344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95264 len:8 PRP1 0x0 PRP2 0x0 00:22:39.363 [2024-12-13 09:24:26.517361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.363 [2024-12-13 09:24:26.517379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.363 [2024-12-13 09:24:26.517393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.363 [2024-12-13 09:24:26.517406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95272 len:8 PRP1 0x0 PRP2 0x0 00:22:39.363 [2024-12-13 09:24:26.517423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.363 [2024-12-13 09:24:26.517440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.363 [2024-12-13 09:24:26.517453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.363 [2024-12-13 09:24:26.517468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95280 len:8 PRP1 0x0 PRP2 0x0 00:22:39.363 [2024-12-13 09:24:26.517487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.363 [2024-12-13 09:24:26.517503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.363 [2024-12-13 09:24:26.517517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.363 [2024-12-13 09:24:26.517531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95288 len:8 PRP1 0x0 PRP2 0x0 00:22:39.363 [2024-12-13 09:24:26.517548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.363 [2024-12-13 09:24:26.517780] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:22:39.363 [2024-12-13 09:24:26.517849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.363 [2024-12-13 09:24:26.517876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.363 [2024-12-13 09:24:26.517896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.363 [2024-12-13 09:24:26.517915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.363 [2024-12-13 09:24:26.517932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.363 [2024-12-13 09:24:26.517950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.363 [2024-12-13 09:24:26.517968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.363 [2024-12-13 09:24:26.517985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.363 [2024-12-13 09:24:26.518014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:39.363 [2024-12-13 09:24:26.518100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:22:39.363 [2024-12-13 09:24:26.521852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:39.363 [2024-12-13 09:24:26.544477] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
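Note: each "Start failover from ... to ..." notice followed by "Resetting controller successful" above marks one completed path switch on the multipath NVMe bdev; the aborted WRITE/READ completions with "ABORTED - SQ DELETION" are the queued I/Os drained when the old submission queue is torn down. The trace that follows counts these notices with grep -c 'Resetting controller successful' and compares the count against 3. A minimal sketch of that kind of check, assuming the run output was captured to the try.txt file this test writes (the actual script may feed the grep differently):

    # Count completed failovers in the captured run log and require exactly three.
    count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, saw $count" >&2
        exit 1
    fi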
00:22:39.363 8071.90 IOPS, 31.53 MiB/s [2024-12-13T09:24:33.253Z] 8094.45 IOPS, 31.62 MiB/s [2024-12-13T09:24:33.253Z] 8119.58 IOPS, 31.72 MiB/s [2024-12-13T09:24:33.253Z] 8142.92 IOPS, 31.81 MiB/s [2024-12-13T09:24:33.253Z] 8162.36 IOPS, 31.88 MiB/s [2024-12-13T09:24:33.253Z] 8176.73 IOPS, 31.94 MiB/s 00:22:39.363 Latency(us) 00:22:39.363 [2024-12-13T09:24:33.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.363 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:39.363 Verification LBA range: start 0x0 length 0x4000 00:22:39.363 NVMe0n1 : 15.01 8176.48 31.94 211.39 0.00 15228.23 610.68 21328.99 00:22:39.363 [2024-12-13T09:24:33.253Z] =================================================================================================================== 00:22:39.363 [2024-12-13T09:24:33.253Z] Total : 8176.48 31.94 211.39 0.00 15228.23 610.68 21328.99 00:22:39.363 Received shutdown signal, test time was about 15.000000 seconds 00:22:39.363 00:22:39.363 Latency(us) 00:22:39.363 [2024-12-13T09:24:33.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.363 [2024-12-13T09:24:33.253Z] =================================================================================================================== 00:22:39.363 [2024-12-13T09:24:33.253Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:39.363 09:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:39.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:39.363 09:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:39.363 09:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:39.363 09:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=83372 00:22:39.363 09:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:39.363 09:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 83372 /var/tmp/bdevperf.sock 00:22:39.363 09:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 83372 ']' 00:22:39.363 09:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:39.363 09:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.363 09:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:39.363 09:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.363 09:24:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:40.741 09:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.742 09:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:40.742 09:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:40.742 [2024-12-13 09:24:34.477271] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:40.742 09:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:22:41.000 [2024-12-13 09:24:34.709428] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:22:41.000 09:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:41.260 NVMe0n1 00:22:41.260 09:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:41.518 00:22:41.518 09:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:42.085 00:22:42.085 09:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:42.085 09:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:42.344 09:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:42.603 09:24:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:45.892 09:24:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:45.892 09:24:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:45.892 09:24:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=83449 00:22:45.892 09:24:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:45.892 09:24:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 83449 00:22:47.272 { 00:22:47.272 "results": [ 00:22:47.272 { 00:22:47.272 "job": "NVMe0n1", 00:22:47.272 "core_mask": "0x1", 00:22:47.272 "workload": "verify", 00:22:47.272 "status": "finished", 00:22:47.272 "verify_range": { 00:22:47.272 "start": 0, 00:22:47.272 "length": 16384 00:22:47.272 }, 00:22:47.272 "queue_depth": 128, 
00:22:47.272 "io_size": 4096, 00:22:47.272 "runtime": 1.011846, 00:22:47.272 "iops": 7052.456599126745, 00:22:47.272 "mibps": 27.548658590338846, 00:22:47.272 "io_failed": 0, 00:22:47.272 "io_timeout": 0, 00:22:47.272 "avg_latency_us": 18041.31874439462, 00:22:47.272 "min_latency_us": 1303.2727272727273, 00:22:47.272 "max_latency_us": 17158.516363636365 00:22:47.272 } 00:22:47.272 ], 00:22:47.272 "core_count": 1 00:22:47.272 } 00:22:47.272 09:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:47.272 [2024-12-13 09:24:33.273159] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:22:47.272 [2024-12-13 09:24:33.273362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83372 ] 00:22:47.272 [2024-12-13 09:24:33.446727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.272 [2024-12-13 09:24:33.542742] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.272 [2024-12-13 09:24:33.697603] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:47.272 [2024-12-13 09:24:36.283458] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:22:47.272 [2024-12-13 09:24:36.283603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.272 [2024-12-13 09:24:36.283639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.272 [2024-12-13 09:24:36.283666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.272 [2024-12-13 09:24:36.283687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.272 [2024-12-13 09:24:36.283705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.272 [2024-12-13 09:24:36.283724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.272 [2024-12-13 09:24:36.283743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.272 [2024-12-13 09:24:36.283773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.272 [2024-12-13 09:24:36.283797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:22:47.272 [2024-12-13 09:24:36.283872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:22:47.272 [2024-12-13 09:24:36.283917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:22:47.272 [2024-12-13 09:24:36.289700] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 
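The startup excerpt above (read back from try.txt) shows the second bdevperf instance attaching to nqn.2016-06.io.spdk:cnode1 and then failing over from 10.0.0.3:4420 to 10.0.0.3:4421 after the 4420 path is detached earlier in the trace. The path setup driving this is visible in the preceding xtrace: listeners are added on ports 4421 and 4422, the controller is attached with -x failover over 4420/4421/4422, and the active 4420 path is then removed. A condensed, hedged sketch of that sequence (addresses, ports and NQN taken from this run; this is an illustration, not the literal failover.sh script):

    # Sketch of the failover path setup seen in the trace; assumes the NVMe-oF
    # target and bdevperf (-r /var/tmp/bdevperf.sock) are already running.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # Expose two additional listeners on the target.
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.3 -s 4421
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.3 -s 4422

    # Attach the same controller over all three paths in failover mode.
    for port in 4420 4421 4422; do
        $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.3 -s "$port" -f ipv4 -n $NQN -x failover
    done

    # Drop the active path; bdev_nvme fails over to the next listener.
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n $NQN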
00:22:47.272 Running I/O for 1 seconds... 00:22:47.272 6992.00 IOPS, 27.31 MiB/s 00:22:47.272 Latency(us) 00:22:47.272 [2024-12-13T09:24:41.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.272 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:47.272 Verification LBA range: start 0x0 length 0x4000 00:22:47.272 NVMe0n1 : 1.01 7052.46 27.55 0.00 0.00 18041.32 1303.27 17158.52 00:22:47.272 [2024-12-13T09:24:41.162Z] =================================================================================================================== 00:22:47.272 [2024-12-13T09:24:41.162Z] Total : 7052.46 27.55 0.00 0.00 18041.32 1303.27 17158.52 00:22:47.272 09:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:47.272 09:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:47.272 09:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:47.531 09:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:47.531 09:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:47.790 09:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:48.049 09:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:51.341 09:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:51.341 09:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:51.341 09:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 83372 00:22:51.341 09:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 83372 ']' 00:22:51.341 09:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 83372 00:22:51.341 09:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:51.342 09:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:51.342 09:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83372 00:22:51.342 killing process with pid 83372 00:22:51.342 09:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:51.342 09:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:51.342 09:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83372' 00:22:51.342 09:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 83372 00:22:51.342 09:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 83372 00:22:52.280 09:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:52.280 09:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:52.539 09:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:52.539 09:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:52.539 09:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:52.539 09:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:52.539 09:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:22:52.539 09:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:52.539 09:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:22:52.539 09:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:52.539 09:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:52.539 rmmod nvme_tcp 00:22:52.539 rmmod nvme_fabrics 00:22:52.539 rmmod nvme_keyring 00:22:52.539 09:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:52.799 09:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:22:52.799 09:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:22:52.799 09:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 83111 ']' 00:22:52.799 09:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 83111 00:22:52.799 09:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 83111 ']' 00:22:52.799 09:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 83111 00:22:52.799 09:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:52.799 09:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:52.799 09:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83111 00:22:52.799 killing process with pid 83111 00:22:52.799 09:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:52.799 09:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:52.799 09:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83111' 00:22:52.799 09:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 83111 00:22:52.799 09:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 83111 00:22:53.736 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:53.736 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:53.736 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:53.736 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:22:53.736 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:22:53.736 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:53.736 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:22:53.736 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:53.736 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:53.736 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:53.736 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:53.736 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:53.736 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:53.736 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:53.736 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:53.736 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:53.736 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:53.736 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:53.736 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:53.736 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:53.736 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:53.736 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:53.995 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:53.995 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.995 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.996 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.996 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:22:53.996 00:22:53.996 real 0m35.744s 00:22:53.996 user 2m16.561s 00:22:53.996 sys 0m5.631s 00:22:53.996 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:53.996 09:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:53.996 ************************************ 00:22:53.996 END TEST nvmf_failover 00:22:53.996 ************************************ 00:22:53.996 09:24:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:53.996 09:24:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:53.996 09:24:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:53.996 09:24:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.996 ************************************ 00:22:53.996 START TEST nvmf_host_discovery 00:22:53.996 ************************************ 00:22:53.996 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:53.996 * Looking for test storage... 
00:22:53.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:53.996 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:53.996 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:22:53.996 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:54.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.256 --rc genhtml_branch_coverage=1 00:22:54.256 --rc genhtml_function_coverage=1 00:22:54.256 --rc genhtml_legend=1 00:22:54.256 --rc geninfo_all_blocks=1 00:22:54.256 --rc geninfo_unexecuted_blocks=1 00:22:54.256 00:22:54.256 ' 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:54.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.256 --rc genhtml_branch_coverage=1 00:22:54.256 --rc genhtml_function_coverage=1 00:22:54.256 --rc genhtml_legend=1 00:22:54.256 --rc geninfo_all_blocks=1 00:22:54.256 --rc geninfo_unexecuted_blocks=1 00:22:54.256 00:22:54.256 ' 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:54.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.256 --rc genhtml_branch_coverage=1 00:22:54.256 --rc genhtml_function_coverage=1 00:22:54.256 --rc genhtml_legend=1 00:22:54.256 --rc geninfo_all_blocks=1 00:22:54.256 --rc geninfo_unexecuted_blocks=1 00:22:54.256 00:22:54.256 ' 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:54.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.256 --rc genhtml_branch_coverage=1 00:22:54.256 --rc genhtml_function_coverage=1 00:22:54.256 --rc genhtml_legend=1 00:22:54.256 --rc geninfo_all_blocks=1 00:22:54.256 --rc geninfo_unexecuted_blocks=1 00:22:54.256 00:22:54.256 ' 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.256 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:54.257 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
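The interface and namespace names defined here, together with the ip commands that follow below, describe the virtual topology nvmf_veth_init builds: a target namespace nvmf_tgt_ns_spdk holding nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3, 10.0.0.4), initiator-side endpoints nvmf_init_if/nvmf_init_if2 (10.0.0.1, 10.0.0.2), and the bridge nvmf_br joining the veth peers. A condensed sketch, reduced to one initiator and one target interface and assuming root privileges and no pre-existing devices with these names:

    # target side lives in its own network namespace
    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: *_if is the endpoint, *_br is the bridge-facing peer
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # addressing: initiator on 10.0.0.1, target on 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # bring links up and enslave the bridge-facing peers to nvmf_br
    ip link add nvmf_br type bridge
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # allow NVMe/TCP traffic in and verify reachability
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3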
00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:54.257 Cannot find device "nvmf_init_br" 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:54.257 Cannot find device "nvmf_init_br2" 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:54.257 Cannot find device "nvmf_tgt_br" 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:54.257 Cannot find device "nvmf_tgt_br2" 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:54.257 Cannot find device "nvmf_init_br" 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:22:54.257 09:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:54.257 Cannot find device "nvmf_init_br2" 00:22:54.257 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:22:54.257 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:54.257 Cannot find device "nvmf_tgt_br" 00:22:54.257 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:22:54.257 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:54.257 Cannot find device "nvmf_tgt_br2" 00:22:54.257 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:22:54.257 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:54.257 Cannot find device "nvmf_br" 00:22:54.257 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:22:54.257 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:54.257 Cannot find device "nvmf_init_if" 00:22:54.257 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:22:54.257 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:54.257 Cannot find device "nvmf_init_if2" 00:22:54.257 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:22:54.257 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:54.257 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:22:54.257 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:22:54.257 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:54.257 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:54.257 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:22:54.257 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:54.257 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:54.257 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:54.257 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:54.257 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:54.257 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:54.523 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:54.523 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:54.523 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:54.523 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:54.523 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:54.523 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:54.523 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:54.523 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:54.523 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:54.523 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:54.523 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:54.523 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:54.523 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:54.523 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:54.523 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:54.523 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:54.523 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:54.523 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:54.523 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:54.523 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:54.523 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:54.523 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:54.523 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:54.524 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:54.524 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:22:54.524 00:22:54.524 --- 10.0.0.3 ping statistics --- 00:22:54.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.524 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:54.524 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:54.524 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:22:54.524 00:22:54.524 --- 10.0.0.4 ping statistics --- 00:22:54.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.524 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:54.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:54.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:22:54.524 00:22:54.524 --- 10.0.0.1 ping statistics --- 00:22:54.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.524 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:54.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:54.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:22:54.524 00:22:54.524 --- 10.0.0.2 ping statistics --- 00:22:54.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.524 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=83786 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 83786 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 83786 ']' 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:54.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:54.524 09:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.814 [2024-12-13 09:24:48.488196] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:22:54.814 [2024-12-13 09:24:48.488379] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.814 [2024-12-13 09:24:48.678140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.084 [2024-12-13 09:24:48.801827] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.084 [2024-12-13 09:24:48.801889] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.084 [2024-12-13 09:24:48.801913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.084 [2024-12-13 09:24:48.801943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.084 [2024-12-13 09:24:48.801962] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:55.084 [2024-12-13 09:24:48.803420] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.344 [2024-12-13 09:24:49.017023] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:55.602 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:55.602 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:55.602 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:55.602 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:55.602 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.862 [2024-12-13 09:24:49.513405] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.862 [2024-12-13 09:24:49.521673] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.862 09:24:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.862 null0 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.862 null1 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=83818 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 83818 /tmp/host.sock 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 83818 ']' 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.862 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.862 09:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:55.862 [2024-12-13 09:24:49.671594] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:22:55.862 [2024-12-13 09:24:49.671762] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83818 ] 00:22:56.121 [2024-12-13 09:24:49.856097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.121 [2024-12-13 09:24:49.968026] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.380 [2024-12-13 09:24:50.122434] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:56.948 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.948 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:56.948 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:56.948 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:56.948 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.948 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.949 09:24:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.949 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.208 09:24:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:57.208 [2024-12-13 09:24:50.950012] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:57.208 09:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.208 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:22:57.208 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:57.208 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:57.208 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:57.208 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.208 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:57.208 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:57.208 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:57.208 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.208 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:57.208 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:57.208 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:57.208 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:57.208 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:57.208 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:57.208 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:57.208 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:57.208 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:57.209 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:57.209 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:57.209 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.209 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:57.209 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.467 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:57.467 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:57.467 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:57.467 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:57.467 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:57.467 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.467 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:57.467 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.467 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:57.467 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:57.467 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:57.467 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:57.467 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:57.467 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:57.467 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:57.467 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:57.467 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.467 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:57.467 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:57.467 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:57.467 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.467 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:22:57.467 09:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:57.725 [2024-12-13 09:24:51.597729] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:57.725 [2024-12-13 09:24:51.597786] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:57.725 
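The xtrace above keeps expanding two helpers, get_subsystem_names (host/discovery.sh@59) and waitforcondition (common/autotest_common.sh@918-@924). The following is a rough bash reconstruction of that polling pattern, pieced together only from the expansions visible in this log; rpc_cmd and the /tmp/host.sock application socket are assumed to exist exactly as used above, and the failure branch of the loop is an assumption since this run never exhausts its retries.

    # Sketch reconstructed from the trace; the real SPDK helpers may differ in detail.
    get_subsystem_names() {
        # Controller names currently attached on the host-side SPDK app.
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    waitforcondition() {
        local cond=$1
        local max=10                    # retry budget seen at @919
        while ((max--)); do
            eval "$cond" && return 0    # condition satisfied
            sleep 1                     # the @924 sleep between attempts
        done
        return 1                        # assumed: give up after ~10s (not exercised in this run)
    }

    # Usage matching discovery.sh@105: wait for discovery to attach controller nvme0.
    waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'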
[2024-12-13 09:24:51.597826] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:57.725 [2024-12-13 09:24:51.603795] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:22:57.984 [2024-12-13 09:24:51.666471] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:22:57.984 [2024-12-13 09:24:51.667911] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b280:1 started. 00:22:57.984 [2024-12-13 09:24:51.670090] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:22:57.984 [2024-12-13 09:24:51.670154] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:22:57.984 [2024-12-13 09:24:51.676617] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b280 was disconnected and freed. delete nvme_qpair. 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
jq -r '.[].name' 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # 
eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.552 [2024-12-13 09:24:52.419282] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b500:1 started. 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.552 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:58.552 [2024-12-13 09:24:52.427282] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b500 was disconnected and freed. delete nvme_qpair. 
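Right after the nvmf_subsystem_add_ns call the trace expands get_notification_count (host/discovery.sh@74-@75) and the is_notification_count_eq wrapper (@79-@80). A minimal sketch assembled from those fragments; the notify_id bookkeeping is inferred from the notification_count=/notify_id= assignments printed in this run (0 -> 1 -> 2 -> 4), not from the script source itself.

    # Sketch based on the @74/@75/@79/@80 expansions above.
    notify_id=0

    get_notification_count() {
        # Count notifications newer than the last consumed id.
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
            | jq '. | length')
        notify_id=$((notify_id + notification_count))   # inferred from the logged values
    }

    is_notification_count_eq() {
        local expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }

The counts in this run are consistent with one notification per bdev register event: 1 after nvme0n1 attaches, and 1 again after the null1 namespace shows up as nvme0n2.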
00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.812 [2024-12-13 09:24:52.532168] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:58.812 [2024-12-13 09:24:52.532714] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:58.812 [2024-12-13 09:24:52.532762] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:58.812 [2024-12-13 09:24:52.538738] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:58.812 [2024-12-13 09:24:52.601341] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:22:58.812 [2024-12-13 09:24:52.601412] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:22:58.812 [2024-12-13 09:24:52.601433] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:22:58.812 [2024-12-13 09:24:52.601444] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:58.812 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:58.813 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:58.813 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:58.813 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:58.813 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@63 -- # sort -n 00:22:58.813 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:58.813 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.813 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:58.813 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:58.813 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.072 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:59.072 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:59.072 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:59.072 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:59.072 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:59.072 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:59.072 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:59.072 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:59.072 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:59.072 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:59.072 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:59.072 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:59.072 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.072 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:59.072 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.072 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:59.072 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:59.072 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:59.073 [2024-12-13 09:24:52.761372] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:59.073 [2024-12-13 09:24:52.761439] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:59.073 [2024-12-13 09:24:52.766958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.073 [2024-12-13 09:24:52.767004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.073 [2024-12-13 09:24:52.767024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.073 [2024-12-13 09:24:52.767039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.073 [2024-12-13 09:24:52.767053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.073 [2024-12-13 09:24:52.767066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.073 [2024-12-13 09:24:52.767080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.073 [2024-12-13 09:24:52.767093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.073 [2024-12-13 09:24:52.767106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:22:59.073 [2024-12-13 09:24:52.767398] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:59.073 [2024-12-13 09:24:52.767432] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:22:59.073 [2024-12-13 09:24:52.767532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.073 09:24:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:59.073 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.332 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:59.332 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:59.333 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:59.333 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:59.333 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:59.333 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.333 09:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 
-- # (( max-- )) 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.333 09:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.712 [2024-12-13 09:24:54.198521] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:00.712 [2024-12-13 09:24:54.198572] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:00.712 [2024-12-13 09:24:54.198608] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:00.712 [2024-12-13 09:24:54.204576] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:23:00.712 [2024-12-13 09:24:54.263155] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:23:00.712 [2024-12-13 09:24:54.264352] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x61500002c680:1 started. 00:23:00.712 [2024-12-13 09:24:54.266784] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:23:00.712 [2024-12-13 09:24:54.266892] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:00.712 [2024-12-13 09:24:54.269066] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x61500002c680 was disconnected and freed. delete nvme_qpair. 
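What follows is the duplicate-start error path: discovery was just restarted with -b nvme, and a second bdev_nvme_start_discovery using the same name is expected to be rejected with JSON-RPC error -17 ("File exists"). The NOT wrapper from autotest_common.sh inverts the exit status so the test step only passes when the RPC fails. A simplified sketch of that wrapper, based on the @652-@679 expansions visible here; the valid_exec_arg type check and the es > 128 branch seen in the trace are elided, so treat this as an approximation rather than the real helper.

    # Approximation of the expected-failure wrapper exercised at discovery.sh@143.
    NOT() {
        local es=0
        "$@" || es=$?     # run the wrapped command, capturing its exit status
        ((!es == 0))      # succeed only when the command failed
    }

    # Re-issuing the same discovery name must fail (compare the -17 response below).
    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

The request/response dump that follows in the log confirms the expected rejection.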
00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.712 request: 00:23:00.712 { 00:23:00.712 "name": "nvme", 00:23:00.712 "trtype": "tcp", 00:23:00.712 "traddr": "10.0.0.3", 00:23:00.712 "adrfam": "ipv4", 00:23:00.712 "trsvcid": "8009", 00:23:00.712 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:00.712 "wait_for_attach": true, 00:23:00.712 "method": "bdev_nvme_start_discovery", 00:23:00.712 "req_id": 1 00:23:00.712 } 00:23:00.712 Got JSON-RPC error response 00:23:00.712 response: 00:23:00.712 { 00:23:00.712 "code": -17, 00:23:00.712 "message": "File exists" 00:23:00.712 } 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.712 request: 00:23:00.712 { 00:23:00.712 "name": "nvme_second", 00:23:00.712 "trtype": "tcp", 00:23:00.712 "traddr": "10.0.0.3", 00:23:00.712 "adrfam": "ipv4", 00:23:00.712 "trsvcid": "8009", 00:23:00.712 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:00.712 "wait_for_attach": true, 00:23:00.712 "method": "bdev_nvme_start_discovery", 00:23:00.712 "req_id": 1 00:23:00.712 } 00:23:00.712 Got JSON-RPC error response 00:23:00.712 response: 00:23:00.712 { 00:23:00.712 "code": -17, 00:23:00.712 "message": "File exists" 00:23:00.712 } 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:00.712 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.713 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:00.713 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:00.713 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:00.713 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:00.713 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:00.713 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.713 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:00.713 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.713 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:00.713 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.713 09:24:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:02.089 [2024-12-13 09:24:55.539475] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.089 [2024-12-13 09:24:55.539553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002c900 with addr=10.0.0.3, port=8010 00:23:02.089 [2024-12-13 09:24:55.539611] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:02.089 [2024-12-13 
09:24:55.539627] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:02.089 [2024-12-13 09:24:55.539641] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:23:02.657 [2024-12-13 09:24:56.539476] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.657 [2024-12-13 09:24:56.539550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002cb80 with addr=10.0.0.3, port=8010 00:23:02.657 [2024-12-13 09:24:56.539604] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:02.657 [2024-12-13 09:24:56.539633] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:02.657 [2024-12-13 09:24:56.539645] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:23:04.035 [2024-12-13 09:24:57.539215] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:23:04.035 request: 00:23:04.035 { 00:23:04.035 "name": "nvme_second", 00:23:04.035 "trtype": "tcp", 00:23:04.035 "traddr": "10.0.0.3", 00:23:04.035 "adrfam": "ipv4", 00:23:04.035 "trsvcid": "8010", 00:23:04.035 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:04.035 "wait_for_attach": false, 00:23:04.035 "attach_timeout_ms": 3000, 00:23:04.035 "method": "bdev_nvme_start_discovery", 00:23:04.035 "req_id": 1 00:23:04.035 } 00:23:04.035 Got JSON-RPC error response 00:23:04.035 response: 00:23:04.035 { 00:23:04.035 "code": -110, 00:23:04.035 "message": "Connection timed out" 00:23:04.035 } 00:23:04.035 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:04.035 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:04.035 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:04.035 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:04.035 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:04.035 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:04.035 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:04.035 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.035 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:04.035 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.035 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:04.035 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:04.035 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.035 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:04.035 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:04.035 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 83818 00:23:04.035 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:04.035 09:24:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:04.036 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:04.036 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:04.036 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:04.036 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:04.036 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:04.036 rmmod nvme_tcp 00:23:04.036 rmmod nvme_fabrics 00:23:04.036 rmmod nvme_keyring 00:23:04.036 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:04.036 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:04.036 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:04.036 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 83786 ']' 00:23:04.036 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 83786 00:23:04.036 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 83786 ']' 00:23:04.036 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 83786 00:23:04.036 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:23:04.036 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:04.036 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83786 00:23:04.036 killing process with pid 83786 00:23:04.036 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:04.036 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:04.036 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83786' 00:23:04.036 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 83786 00:23:04.036 09:24:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 83786 00:23:04.973 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:04.973 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:04.973 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:04.973 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:04.973 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:04.973 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:04.973 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:04.973 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:04.973 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:04.973 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:04.973 09:24:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:04.973 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:04.973 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:04.973 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:04.973 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:04.973 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:04.973 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:04.973 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:04.973 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:04.973 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:04.973 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:04.973 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:04.973 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:04.973 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.973 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:04.974 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.974 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:23:04.974 00:23:04.974 real 0m11.090s 00:23:04.974 user 0m20.756s 00:23:04.974 sys 0m2.113s 00:23:04.974 ************************************ 00:23:04.974 END TEST nvmf_host_discovery 00:23:04.974 ************************************ 00:23:04.974 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:04.974 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.974 09:24:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:04.974 09:24:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:04.974 09:24:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:04.974 09:24:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.234 ************************************ 00:23:05.234 START TEST nvmf_host_multipath_status 00:23:05.234 ************************************ 00:23:05.234 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:05.234 * Looking for test storage... 
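For orientation, the nvmf_host_discovery teardown traced just above (everything between nvmftestfini and the END TEST banner) reduces to a handful of shell steps. The following is a condensed, hand-written sketch rather than the verbatim test code; the pid is the one from this particular run, and the final namespace removal is inferred from the _remove_spdk_ns call rather than shown explicitly in the trace:

  modprobe -v -r nvme-tcp                                  # unload the modules the test loaded (rmmod of
  modprobe -v -r nvme-fabrics                              # nvme_fabrics/nvme_keyring follows as dependencies)
  kill 83786 && wait 83786                                 # stop the nvmf_tgt app for this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the SPDK-tagged firewall rules
  ip link set nvmf_init_br nomaster                        # detach the host-side veth ends from the bridge
  ip link set nvmf_init_br down                            # (same for nvmf_init_br2 / nvmf_tgt_br / nvmf_tgt_br2)
  ip link delete nvmf_br type bridge                       # remove the bridge and both veth pairs
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk                         # assumed: _remove_spdk_ns drops the target namespace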
00:23:05.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:05.234 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:05.234 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:23:05.234 09:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:05.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.234 --rc genhtml_branch_coverage=1 00:23:05.234 --rc genhtml_function_coverage=1 00:23:05.234 --rc genhtml_legend=1 00:23:05.234 --rc geninfo_all_blocks=1 00:23:05.234 --rc geninfo_unexecuted_blocks=1 00:23:05.234 00:23:05.234 ' 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:05.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.234 --rc genhtml_branch_coverage=1 00:23:05.234 --rc genhtml_function_coverage=1 00:23:05.234 --rc genhtml_legend=1 00:23:05.234 --rc geninfo_all_blocks=1 00:23:05.234 --rc geninfo_unexecuted_blocks=1 00:23:05.234 00:23:05.234 ' 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:05.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.234 --rc genhtml_branch_coverage=1 00:23:05.234 --rc genhtml_function_coverage=1 00:23:05.234 --rc genhtml_legend=1 00:23:05.234 --rc geninfo_all_blocks=1 00:23:05.234 --rc geninfo_unexecuted_blocks=1 00:23:05.234 00:23:05.234 ' 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:05.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.234 --rc genhtml_branch_coverage=1 00:23:05.234 --rc genhtml_function_coverage=1 00:23:05.234 --rc genhtml_legend=1 00:23:05.234 --rc geninfo_all_blocks=1 00:23:05.234 --rc geninfo_unexecuted_blocks=1 00:23:05.234 00:23:05.234 ' 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:05.234 09:24:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.234 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:05.235 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:05.235 Cannot find device "nvmf_init_br" 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:05.235 Cannot find device "nvmf_init_br2" 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:23:05.235 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:05.495 Cannot find device "nvmf_tgt_br" 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:05.495 Cannot find device "nvmf_tgt_br2" 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:05.495 Cannot find device "nvmf_init_br" 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:05.495 Cannot find device "nvmf_init_br2" 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:05.495 Cannot find device "nvmf_tgt_br" 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:05.495 Cannot find device "nvmf_tgt_br2" 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:05.495 Cannot find device "nvmf_br" 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:23:05.495 Cannot find device "nvmf_init_if" 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:05.495 Cannot find device "nvmf_init_if2" 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:05.495 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:05.495 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:05.495 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:05.754 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:05.754 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:05.754 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:05.755 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:05.755 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:23:05.755 00:23:05.755 --- 10.0.0.3 ping statistics --- 00:23:05.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.755 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:05.755 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:05.755 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:23:05.755 00:23:05.755 --- 10.0.0.4 ping statistics --- 00:23:05.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.755 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:05.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:05.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:23:05.755 00:23:05.755 --- 10.0.0.1 ping statistics --- 00:23:05.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.755 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:05.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:05.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:23:05.755 00:23:05.755 --- 10.0.0.2 ping statistics --- 00:23:05.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.755 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:05.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=84332 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 84332 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 84332 ']' 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
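The environment the multipath test runs in is the virtual topology nvmf_veth_init builds in the trace above: a network namespace for the target, veth pairs whose host-side ends are enslaved to a bridge, SPDK-tagged iptables accepts, and ping checks of every address. A minimal sketch follows, showing one initiator pair and one target pair (the run creates a second pair of each, nvmf_init_if2/10.0.0.2 and nvmf_tgt_if2/10.0.0.4, the same way; the iptables comment text is abbreviated here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end + bridge end
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end + bridge end
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                        # bridge the host-side ends together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.3                                             # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target -> initiator
  # nvmfappstart then launches the target inside the namespace:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3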
00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:05.755 09:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:06.014 [2024-12-13 09:24:59.654586] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:06.014 [2024-12-13 09:24:59.654750] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.014 [2024-12-13 09:24:59.834270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:06.273 [2024-12-13 09:24:59.916691] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.273 [2024-12-13 09:24:59.916983] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:06.273 [2024-12-13 09:24:59.917150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:06.273 [2024-12-13 09:24:59.917305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:06.273 [2024-12-13 09:24:59.917363] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:06.273 [2024-12-13 09:24:59.919070] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.273 [2024-12-13 09:24:59.919084] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.273 [2024-12-13 09:25:00.081552] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:06.841 09:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.841 09:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:06.841 09:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:06.841 09:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:06.841 09:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:06.841 09:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.841 09:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=84332 00:23:06.841 09:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:07.100 [2024-12-13 09:25:00.944453] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.100 09:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:07.669 Malloc0 00:23:07.669 09:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:07.927 09:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:08.187 09:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:08.187 [2024-12-13 09:25:02.028961] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:08.187 09:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:23:08.446 [2024-12-13 09:25:02.257046] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:08.446 09:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=84388 00:23:08.446 09:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:08.446 09:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:08.446 09:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 84388 /var/tmp/bdevperf.sock 00:23:08.446 09:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 84388 ']' 00:23:08.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.446 09:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.446 09:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:08.446 09:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
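With the namespaced target running, multipath_status.sh configures it over JSON-RPC and starts a bdevperf host, exactly as traced above. A condensed sketch (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and bdevperf for the build/examples binary; comments reflect the test's MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE settings):

  rpc.py nvmf_create_transport -t tcp -o -u 8192                 # create the TCP transport with the test's options
  rpc.py bdev_malloc_create 64 512 -b Malloc0                    # 64 MB RAM-backed bdev with 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r: ANA reporting
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &

The two bdev_nvme_attach_controller calls traced next then register both listeners as multipath I/O paths of Nvme0, which the later set_ANA_state / port_status (bdev_nvme_get_io_paths piped through jq) checks inspect.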
00:23:08.446 09:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:08.446 09:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:09.824 09:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:09.824 09:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:09.824 09:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:09.824 09:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:10.083 Nvme0n1 00:23:10.084 09:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:10.343 Nvme0n1 00:23:10.343 09:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:10.343 09:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:12.878 09:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:12.878 09:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:23:12.878 09:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:12.878 09:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:14.257 09:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:14.257 09:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:14.257 09:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.257 09:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:14.257 09:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.257 09:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:14.257 09:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.257 09:25:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:14.516 09:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:14.516 09:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:14.516 09:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.516 09:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:14.774 09:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.774 09:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:14.774 09:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.774 09:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:15.034 09:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.034 09:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:15.034 09:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.034 09:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:15.293 09:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.293 09:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:15.293 09:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.293 09:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:15.552 09:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.552 09:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:15.552 09:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:15.811 09:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:16.071 09:25:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:17.039 09:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:17.039 09:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:17.039 09:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.039 09:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:17.298 09:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:17.298 09:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:17.298 09:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.298 09:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:17.867 09:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.867 09:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:17.867 09:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.867 09:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:17.867 09:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.867 09:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:17.867 09:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:17.867 09:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.436 09:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.436 09:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:18.436 09:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.436 09:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:18.436 09:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.436 09:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:18.436 09:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.436 09:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:18.695 09:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.695 09:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:18.695 09:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:18.954 09:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:23:19.213 09:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:20.151 09:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:20.151 09:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:20.151 09:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.151 09:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:20.410 09:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.410 09:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:20.410 09:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.410 09:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:20.669 09:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:20.669 09:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:20.669 09:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.669 09:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:20.928 09:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.928 09:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:23:20.928 09:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.928 09:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:21.187 09:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:21.187 09:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:21.187 09:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.187 09:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:21.446 09:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:21.446 09:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:21.446 09:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.446 09:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:21.705 09:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:21.705 09:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:21.705 09:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:21.964 09:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:22.223 09:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:23.159 09:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:23.159 09:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:23.159 09:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.159 09:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:23.419 09:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.419 09:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:23.419 09:25:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.419 09:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:23.678 09:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:23.678 09:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:23.678 09:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.678 09:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:23.938 09:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.938 09:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:23.938 09:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.938 09:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:24.506 09:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.506 09:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:24.506 09:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.506 09:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:24.506 09:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.506 09:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:24.506 09:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.506 09:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:24.765 09:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:24.765 09:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:24.765 09:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:23:25.024 09:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:25.283 09:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:26.220 09:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:26.220 09:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:26.220 09:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.220 09:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:26.480 09:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:26.480 09:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:26.480 09:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.480 09:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:26.739 09:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:26.739 09:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:26.739 09:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.739 09:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:26.999 09:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:26.999 09:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:26.999 09:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.999 09:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:27.258 09:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.258 09:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:27.258 09:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.517 09:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:23:27.517 09:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:27.517 09:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:27.517 09:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.517 09:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:27.777 09:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:27.777 09:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:27.777 09:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:23:28.035 09:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:28.294 09:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:29.231 09:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:29.231 09:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:29.231 09:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.231 09:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:29.491 09:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:29.491 09:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:29.491 09:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.491 09:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:29.750 09:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:29.750 09:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:29.750 09:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.750 09:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:23:30.009 09:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.009 09:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:30.009 09:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:30.009 09:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.268 09:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.268 09:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:30.268 09:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.268 09:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:30.527 09:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:30.527 09:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:30.527 09:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.527 09:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:30.787 09:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.787 09:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:31.045 09:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:31.045 09:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:23:31.307 09:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:31.570 09:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:32.948 09:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:32.948 09:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:32.948 09:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:23:32.948 09:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:32.948 09:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.948 09:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:32.948 09:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:32.948 09:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.207 09:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.207 09:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:33.207 09:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.207 09:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:33.466 09:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.466 09:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:33.466 09:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.466 09:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:34.035 09:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.035 09:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:34.035 09:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:34.035 09:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.035 09:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.035 09:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:34.035 09:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.035 09:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:34.295 09:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.295 
09:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:34.295 09:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:34.554 09:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:34.814 09:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:36.206 09:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:36.206 09:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:36.206 09:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.206 09:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:36.206 09:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:36.206 09:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:36.206 09:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.206 09:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:36.494 09:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.494 09:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:36.494 09:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.494 09:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:36.765 09:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.765 09:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:36.765 09:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.765 09:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:37.024 09:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.024 09:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:37.024 09:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:37.024 09:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.283 09:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.283 09:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:37.283 09:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.283 09:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:37.543 09:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.543 09:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:37.543 09:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:37.802 09:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:23:38.061 09:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:38.999 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:38.999 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:38.999 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.999 09:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:39.258 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.258 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:39.258 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.258 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:39.518 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.518 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:23:39.518 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.518 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:40.086 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.086 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:40.086 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.086 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:40.086 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.086 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:40.086 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.086 09:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:40.345 09:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.345 09:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:40.345 09:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.345 09:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:40.604 09:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.604 09:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:40.604 09:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:40.863 09:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:41.123 09:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:42.063 09:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:42.063 09:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:42.063 09:25:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.063 09:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:42.322 09:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.322 09:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:42.322 09:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.322 09:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:42.580 09:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:42.580 09:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:42.581 09:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:42.581 09:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.839 09:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.839 09:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:42.839 09:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.839 09:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:43.098 09:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.098 09:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:43.098 09:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.098 09:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:43.357 09:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.357 09:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:43.357 09:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:43.357 09:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.617 09:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:43.617 09:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 84388 00:23:43.617 09:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 84388 ']' 00:23:43.617 09:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 84388 00:23:43.617 09:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:43.617 09:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.617 09:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84388 00:23:43.617 killing process with pid 84388 00:23:43.617 09:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:43.617 09:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:43.617 09:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84388' 00:23:43.617 09:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 84388 00:23:43.617 09:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 84388 00:23:43.617 { 00:23:43.617 "results": [ 00:23:43.617 { 00:23:43.617 "job": "Nvme0n1", 00:23:43.617 "core_mask": "0x4", 00:23:43.617 "workload": "verify", 00:23:43.617 "status": "terminated", 00:23:43.617 "verify_range": { 00:23:43.617 "start": 0, 00:23:43.617 "length": 16384 00:23:43.617 }, 00:23:43.617 "queue_depth": 128, 00:23:43.617 "io_size": 4096, 00:23:43.617 "runtime": 33.139655, 00:23:43.617 "iops": 8112.15445664718, 00:23:43.617 "mibps": 31.688103346278048, 00:23:43.617 "io_failed": 0, 00:23:43.617 "io_timeout": 0, 00:23:43.617 "avg_latency_us": 15747.419768305821, 00:23:43.617 "min_latency_us": 213.17818181818183, 00:23:43.617 "max_latency_us": 4026531.84 00:23:43.617 } 00:23:43.617 ], 00:23:43.617 "core_count": 1 00:23:43.617 } 00:23:44.557 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 84388 00:23:44.557 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:44.557 [2024-12-13 09:25:02.381602] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:44.557 [2024-12-13 09:25:02.382232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84388 ] 00:23:44.557 [2024-12-13 09:25:02.569513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.557 [2024-12-13 09:25:02.693742] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.557 [2024-12-13 09:25:02.849664] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:44.557 Running I/O for 90 seconds... 
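For reference, the status checks traced above boil down to one small shell pattern: change the ANA state a listener advertises, wait a second for the host to notice, then read the initiator's view of each path with bdev_nvme_get_io_paths filtered through jq. The following is a minimal sketch of that pattern, not the contents of multipath_status.sh itself; the RPC names, NQN, addresses, socket path, and jq filter are taken verbatim from the trace above, while the helper names path_field and ana_cycle are illustrative only.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # path_field: report one field (current/connected/accessible) of the path
    # using the given trsvcid, as the bdevperf initiator currently sees it.
    # RPC, socket path, and jq filter match the ones used in the trace above.
    path_field() {
      local port=$1 field=$2
      "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field"
    }

    # ana_cycle: advertise new ANA states on both listeners of cnode1, give
    # the host a moment to react, then check how the 4421 path is reported.
    ana_cycle() {
      local state_4420=$1 state_4421=$2
      "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.3 -s 4420 -n "$state_4420"
      "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.3 -s 4421 -n "$state_4421"
      sleep 1
      if [[ "$(path_field 4421 accessible)" == "true" ]]; then
        echo "port 4421 accessible"
      else
        echo "port 4421 not accessible"
      fi
    }

    ana_cycle non_optimized inaccessible   # expected to report: port 4421 not accessible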
00:23:44.557 8473.00 IOPS, 33.10 MiB/s [2024-12-13T09:25:38.447Z] 8624.50 IOPS, 33.69 MiB/s [2024-12-13T09:25:38.447Z] 8645.67 IOPS, 33.77 MiB/s [2024-12-13T09:25:38.447Z] 8622.25 IOPS, 33.68 MiB/s [2024-12-13T09:25:38.447Z] 8590.60 IOPS, 33.56 MiB/s [2024-12-13T09:25:38.447Z] 8571.00 IOPS, 33.48 MiB/s [2024-12-13T09:25:38.447Z] 8576.29 IOPS, 33.50 MiB/s [2024-12-13T09:25:38.447Z] 8560.25 IOPS, 33.44 MiB/s [2024-12-13T09:25:38.447Z] 8567.00 IOPS, 33.46 MiB/s [2024-12-13T09:25:38.447Z] 8567.10 IOPS, 33.47 MiB/s [2024-12-13T09:25:38.447Z] 8553.91 IOPS, 33.41 MiB/s [2024-12-13T09:25:38.447Z] 8557.92 IOPS, 33.43 MiB/s [2024-12-13T09:25:38.447Z] 8572.23 IOPS, 33.49 MiB/s [2024-12-13T09:25:38.447Z] 8563.36 IOPS, 33.45 MiB/s [2024-12-13T09:25:38.447Z] [2024-12-13 09:25:18.805782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.557 [2024-12-13 09:25:18.805870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:44.557 [2024-12-13 09:25:18.805962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.557 [2024-12-13 09:25:18.805994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:44.557 [2024-12-13 09:25:18.806024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.557 [2024-12-13 09:25:18.806044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:44.557 [2024-12-13 09:25:18.806071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.557 [2024-12-13 09:25:18.806091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:44.557 [2024-12-13 09:25:18.806117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.557 [2024-12-13 09:25:18.806137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:44.557 [2024-12-13 09:25:18.806163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.557 [2024-12-13 09:25:18.806183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:44.557 [2024-12-13 09:25:18.806209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.557 [2024-12-13 09:25:18.806228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:44.557 [2024-12-13 09:25:18.806255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.557 [2024-12-13 09:25:18.806274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:44.557 [2024-12-13 09:25:18.806322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.557 [2024-12-13 09:25:18.806347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:44.557 [2024-12-13 09:25:18.806395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.557 [2024-12-13 09:25:18.806417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:44.557 [2024-12-13 09:25:18.806442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.557 [2024-12-13 09:25:18.806461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:44.557 [2024-12-13 09:25:18.806487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.557 [2024-12-13 09:25:18.806506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:44.557 [2024-12-13 09:25:18.806531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.557 [2024-12-13 09:25:18.806551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:44.557 [2024-12-13 09:25:18.806576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.558 [2024-12-13 09:25:18.806595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.806620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.558 [2024-12-13 09:25:18.806641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.806667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.558 [2024-12-13 09:25:18.806687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.806713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:73072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.558 [2024-12-13 09:25:18.806732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.806759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.558 [2024-12-13 09:25:18.806778] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.806804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:73088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.558 [2024-12-13 09:25:18.806823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.806893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:73096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.558 [2024-12-13 09:25:18.806915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.806941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:73104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.558 [2024-12-13 09:25:18.806961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.807000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:73112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.558 [2024-12-13 09:25:18.807022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.807049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:73120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.558 [2024-12-13 09:25:18.807069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.807097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:73128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.558 [2024-12-13 09:25:18.807127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.807188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.558 [2024-12-13 09:25:18.807213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.807256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.558 [2024-12-13 09:25:18.807276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.807302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.558 [2024-12-13 09:25:18.807337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.807378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:44.558 [2024-12-13 09:25:18.807402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.807429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.558 [2024-12-13 09:25:18.807449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.807476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.558 [2024-12-13 09:25:18.807496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.807541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.558 [2024-12-13 09:25:18.807562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.807589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.558 [2024-12-13 09:25:18.807610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.807637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.558 [2024-12-13 09:25:18.807657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.807699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.558 [2024-12-13 09:25:18.807730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.807758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.558 [2024-12-13 09:25:18.807779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.807805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.558 [2024-12-13 09:25:18.807825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.807851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.558 [2024-12-13 09:25:18.807870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.807897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 
lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.558 [2024-12-13 09:25:18.807916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.807942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.558 [2024-12-13 09:25:18.807962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.807988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.558 [2024-12-13 09:25:18.808007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.808033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:73136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.558 [2024-12-13 09:25:18.808054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.808080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.558 [2024-12-13 09:25:18.808099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.808125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.558 [2024-12-13 09:25:18.808145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.808170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:73160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.558 [2024-12-13 09:25:18.808190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.808216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:73168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.558 [2024-12-13 09:25:18.808235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.808261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:73176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.558 [2024-12-13 09:25:18.808302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.808336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.558 [2024-12-13 09:25:18.808358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.808384] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:73192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.558 [2024-12-13 09:25:18.808404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.808430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:73200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.558 [2024-12-13 09:25:18.808451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.808477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:73208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.558 [2024-12-13 09:25:18.808497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.808523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:73216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.558 [2024-12-13 09:25:18.808543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.808569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:73224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.558 [2024-12-13 09:25:18.808589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:44.558 [2024-12-13 09:25:18.808615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:73232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.558 [2024-12-13 09:25:18.808635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.808660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:73240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.559 [2024-12-13 09:25:18.808680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.808706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.559 [2024-12-13 09:25:18.808726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.808752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:73256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.559 [2024-12-13 09:25:18.808771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.808796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.559 [2024-12-13 09:25:18.808816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:23:44.559 [2024-12-13 09:25:18.808842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:73272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.559 [2024-12-13 09:25:18.808862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.808898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:73280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.559 [2024-12-13 09:25:18.808918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.808960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:73288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.559 [2024-12-13 09:25:18.808980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.809007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:73296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.559 [2024-12-13 09:25:18.809027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.809053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.559 [2024-12-13 09:25:18.809073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.809100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:73312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.559 [2024-12-13 09:25:18.809120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.809147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:73320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.559 [2024-12-13 09:25:18.809167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.809215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.559 [2024-12-13 09:25:18.809241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.809270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.559 [2024-12-13 09:25:18.809335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.809367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.559 [2024-12-13 09:25:18.809389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.809417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.559 [2024-12-13 09:25:18.809438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.809464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.559 [2024-12-13 09:25:18.809484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.809515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.559 [2024-12-13 09:25:18.809535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.809573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.559 [2024-12-13 09:25:18.809594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.809620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.559 [2024-12-13 09:25:18.809655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.809682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:73328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.559 [2024-12-13 09:25:18.809701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.809727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:73336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.559 [2024-12-13 09:25:18.809747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.809772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:73344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.559 [2024-12-13 09:25:18.809792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.809818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.559 [2024-12-13 09:25:18.809838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.809864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:73360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.559 [2024-12-13 09:25:18.809885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.809910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:73368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.559 [2024-12-13 09:25:18.809930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.809956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:73376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.559 [2024-12-13 09:25:18.809976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.810002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:73384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.559 [2024-12-13 09:25:18.810021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.810047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.559 [2024-12-13 09:25:18.810067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.810092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.559 [2024-12-13 09:25:18.810112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.810138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.559 [2024-12-13 09:25:18.810168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.810196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.559 [2024-12-13 09:25:18.810217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.810242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.559 [2024-12-13 09:25:18.810262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.810288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:73880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.559 [2024-12-13 09:25:18.810323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.810351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:44.559 [2024-12-13 09:25:18.810372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.810398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:73896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.559 [2024-12-13 09:25:18.810417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.810443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.559 [2024-12-13 09:25:18.810462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.810488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.559 [2024-12-13 09:25:18.810514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.810541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.559 [2024-12-13 09:25:18.810561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.810587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:73928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.559 [2024-12-13 09:25:18.810606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:44.559 [2024-12-13 09:25:18.810632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.560 [2024-12-13 09:25:18.810652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.810677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.560 [2024-12-13 09:25:18.810696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.810737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.560 [2024-12-13 09:25:18.810767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.810795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.560 [2024-12-13 09:25:18.810815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.810868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 
lba:73968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.560 [2024-12-13 09:25:18.810891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.810919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:73976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.560 [2024-12-13 09:25:18.810939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.810966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.560 [2024-12-13 09:25:18.810987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.811015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.560 [2024-12-13 09:25:18.811035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.811062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.560 [2024-12-13 09:25:18.811083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.811110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.560 [2024-12-13 09:25:18.811146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.811172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.560 [2024-12-13 09:25:18.811192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.811232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.560 [2024-12-13 09:25:18.811252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.811277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.560 [2024-12-13 09:25:18.811297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.811335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:73400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.560 [2024-12-13 09:25:18.811361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.811389] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:73408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.560 [2024-12-13 09:25:18.811410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.811446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:73416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.560 [2024-12-13 09:25:18.811467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.811493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.560 [2024-12-13 09:25:18.811512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.811538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:73432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.560 [2024-12-13 09:25:18.811559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.811585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:73440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.560 [2024-12-13 09:25:18.811618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.812315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:73448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.560 [2024-12-13 09:25:18.812356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.812401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.560 [2024-12-13 09:25:18.812423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.812457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.560 [2024-12-13 09:25:18.812477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.812510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.560 [2024-12-13 09:25:18.812531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.812562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.560 [2024-12-13 09:25:18.812582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 
dnr:0 00:23:44.560 [2024-12-13 09:25:18.812614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.560 [2024-12-13 09:25:18.812635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.812667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.560 [2024-12-13 09:25:18.812687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.812720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.560 [2024-12-13 09:25:18.812740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.812803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.560 [2024-12-13 09:25:18.812829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.812862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:73456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.560 [2024-12-13 09:25:18.812884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.812916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:73464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.560 [2024-12-13 09:25:18.812939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.812972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:73472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.560 [2024-12-13 09:25:18.812992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.813025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:73480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.560 [2024-12-13 09:25:18.813045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.813077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:73488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.560 [2024-12-13 09:25:18.813098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.813130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:73496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.560 [2024-12-13 09:25:18.813151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.813183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:73504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.560 [2024-12-13 09:25:18.813204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:18.813237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.560 [2024-12-13 09:25:18.813257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:44.560 8254.80 IOPS, 32.25 MiB/s [2024-12-13T09:25:38.450Z] 7738.88 IOPS, 30.23 MiB/s [2024-12-13T09:25:38.450Z] 7283.65 IOPS, 28.45 MiB/s [2024-12-13T09:25:38.450Z] 6879.00 IOPS, 26.87 MiB/s [2024-12-13T09:25:38.450Z] 6757.84 IOPS, 26.40 MiB/s [2024-12-13T09:25:38.450Z] 6840.75 IOPS, 26.72 MiB/s [2024-12-13T09:25:38.450Z] 6944.33 IOPS, 27.13 MiB/s [2024-12-13T09:25:38.450Z] 7175.23 IOPS, 28.03 MiB/s [2024-12-13T09:25:38.450Z] 7373.09 IOPS, 28.80 MiB/s [2024-12-13T09:25:38.450Z] 7553.17 IOPS, 29.50 MiB/s [2024-12-13T09:25:38.450Z] 7602.32 IOPS, 29.70 MiB/s [2024-12-13T09:25:38.450Z] 7639.46 IOPS, 29.84 MiB/s [2024-12-13T09:25:38.450Z] 7671.48 IOPS, 29.97 MiB/s [2024-12-13T09:25:38.450Z] 7761.68 IOPS, 30.32 MiB/s [2024-12-13T09:25:38.450Z] 7904.48 IOPS, 30.88 MiB/s [2024-12-13T09:25:38.450Z] 8026.10 IOPS, 31.35 MiB/s [2024-12-13T09:25:38.450Z] [2024-12-13 09:25:34.898425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.560 [2024-12-13 09:25:34.898513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:44.560 [2024-12-13 09:25:34.898587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.561 [2024-12-13 09:25:34.898633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.898666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.561 [2024-12-13 09:25:34.898688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.898715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.561 [2024-12-13 09:25:34.898735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.898762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.561 [2024-12-13 09:25:34.898782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.898809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:3 nsid:1 lba:11104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.561 [2024-12-13 09:25:34.898830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.898886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.561 [2024-12-13 09:25:34.898907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.898934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.561 [2024-12-13 09:25:34.898954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.898981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.561 [2024-12-13 09:25:34.899001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.899029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.561 [2024-12-13 09:25:34.899049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.899076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.561 [2024-12-13 09:25:34.899096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.899123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.561 [2024-12-13 09:25:34.899144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.899185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.561 [2024-12-13 09:25:34.899205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.899231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.561 [2024-12-13 09:25:34.899269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.899325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.561 [2024-12-13 09:25:34.899350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.899379] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.561 [2024-12-13 09:25:34.899400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.899427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.561 [2024-12-13 09:25:34.899448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.899478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.561 [2024-12-13 09:25:34.899499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.899527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.561 [2024-12-13 09:25:34.899547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.899575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.561 [2024-12-13 09:25:34.899596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.899622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.561 [2024-12-13 09:25:34.899642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.899668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.561 [2024-12-13 09:25:34.899689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.899716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.561 [2024-12-13 09:25:34.899737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.899764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.561 [2024-12-13 09:25:34.899784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.899811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.561 [2024-12-13 09:25:34.899831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 
00:23:44.561 [2024-12-13 09:25:34.899858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.561 [2024-12-13 09:25:34.899879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.899915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.561 [2024-12-13 09:25:34.899937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.899964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.561 [2024-12-13 09:25:34.899984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.900010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.561 [2024-12-13 09:25:34.900031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.900057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.561 [2024-12-13 09:25:34.900078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.900105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.561 [2024-12-13 09:25:34.900127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.900154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.561 [2024-12-13 09:25:34.900175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.900202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.561 [2024-12-13 09:25:34.900222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.900249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.561 [2024-12-13 09:25:34.900270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:44.561 [2024-12-13 09:25:34.900327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.562 [2024-12-13 09:25:34.900350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.900379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.562 [2024-12-13 09:25:34.900400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.900427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.562 [2024-12-13 09:25:34.900448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.900475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.562 [2024-12-13 09:25:34.900496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.900524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.562 [2024-12-13 09:25:34.900554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.900584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.562 [2024-12-13 09:25:34.900605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.900632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.562 [2024-12-13 09:25:34.900656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.900699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.562 [2024-12-13 09:25:34.900719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.900746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.562 [2024-12-13 09:25:34.900766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.900793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.562 [2024-12-13 09:25:34.900813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.900840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.562 [2024-12-13 09:25:34.900860] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.900886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.562 [2024-12-13 09:25:34.900907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.900934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.562 [2024-12-13 09:25:34.900954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.900981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.562 [2024-12-13 09:25:34.901002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.901029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.562 [2024-12-13 09:25:34.901049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.901076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.562 [2024-12-13 09:25:34.901096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.901123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.562 [2024-12-13 09:25:34.901155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.901184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.562 [2024-12-13 09:25:34.901205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.901231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.562 [2024-12-13 09:25:34.901252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.901279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.562 [2024-12-13 09:25:34.901299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.901358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:44.562 [2024-12-13 09:25:34.901381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.901409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.562 [2024-12-13 09:25:34.901430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.901458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.562 [2024-12-13 09:25:34.901479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.901529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.562 [2024-12-13 09:25:34.901555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.901584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.562 [2024-12-13 09:25:34.901606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.901633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.562 [2024-12-13 09:25:34.901654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.901681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.562 [2024-12-13 09:25:34.901716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.901743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.562 [2024-12-13 09:25:34.901763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.901790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.562 [2024-12-13 09:25:34.901810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.901848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.562 [2024-12-13 09:25:34.901870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.901897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 
lba:11008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.562 [2024-12-13 09:25:34.901918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.901945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.562 [2024-12-13 09:25:34.901966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.903779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.562 [2024-12-13 09:25:34.903816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.903852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.562 [2024-12-13 09:25:34.903875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.903902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.562 [2024-12-13 09:25:34.903923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.903949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.562 [2024-12-13 09:25:34.903969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.903995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.562 [2024-12-13 09:25:34.904015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.904041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.562 [2024-12-13 09:25:34.904062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.904089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.562 [2024-12-13 09:25:34.904109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.904134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.562 [2024-12-13 09:25:34.904154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:44.562 [2024-12-13 09:25:34.904180] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.563 [2024-12-13 09:25:34.904200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:44.563 [2024-12-13 09:25:34.904241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.563 [2024-12-13 09:25:34.904262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:44.563 [2024-12-13 09:25:34.904318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.563 [2024-12-13 09:25:34.904343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:44.563 8096.06 IOPS, 31.63 MiB/s [2024-12-13T09:25:38.453Z] 8108.31 IOPS, 31.67 MiB/s [2024-12-13T09:25:38.453Z] 8113.27 IOPS, 31.69 MiB/s [2024-12-13T09:25:38.453Z] Received shutdown signal, test time was about 33.140424 seconds 00:23:44.563 00:23:44.563 Latency(us) 00:23:44.563 [2024-12-13T09:25:38.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.563 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:44.563 Verification LBA range: start 0x0 length 0x4000 00:23:44.563 Nvme0n1 : 33.14 8112.15 31.69 0.00 0.00 15747.42 213.18 4026531.84 00:23:44.563 [2024-12-13T09:25:38.453Z] =================================================================================================================== 00:23:44.563 [2024-12-13T09:25:38.453Z] Total : 8112.15 31.69 0.00 0.00 15747.42 213.18 4026531.84 00:23:44.563 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:44.822 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:44.822 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:44.822 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:44.822 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:44.822 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:23:44.822 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:44.822 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:23:44.822 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:44.822 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:44.822 rmmod nvme_tcp 00:23:44.822 rmmod nvme_fabrics 00:23:44.822 rmmod nvme_keyring 00:23:45.081 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:45.081 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:23:45.081 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@129 -- # return 0 00:23:45.081 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 84332 ']' 00:23:45.081 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 84332 00:23:45.081 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 84332 ']' 00:23:45.081 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 84332 00:23:45.081 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:45.081 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:45.081 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84332 00:23:45.081 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:45.081 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:45.081 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84332' 00:23:45.081 killing process with pid 84332 00:23:45.081 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 84332 00:23:45.081 09:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 84332 00:23:46.019 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:46.019 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:46.019 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:46.019 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:23:46.019 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:23:46.019 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:23:46.019 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:46.019 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:46.019 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:46.019 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:46.019 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:46.019 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:46.019 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:46.019 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:46.019 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:46.019 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:46.019 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:46.019 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:46.019 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:46.019 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:46.019 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:46.019 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:46.278 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:46.278 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.278 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.278 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.278 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:23:46.278 00:23:46.278 real 0m41.079s 00:23:46.278 user 2m11.493s 00:23:46.278 sys 0m10.330s 00:23:46.278 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:46.278 09:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:46.278 ************************************ 00:23:46.278 END TEST nvmf_host_multipath_status 00:23:46.278 ************************************ 00:23:46.278 09:25:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:46.278 09:25:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:46.278 09:25:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:46.278 09:25:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.278 ************************************ 00:23:46.278 START TEST nvmf_discovery_remove_ifc 00:23:46.278 ************************************ 00:23:46.278 09:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:46.278 * Looking for test storage... 
00:23:46.278 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:46.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.278 --rc genhtml_branch_coverage=1 00:23:46.278 --rc genhtml_function_coverage=1 00:23:46.278 --rc genhtml_legend=1 00:23:46.278 --rc geninfo_all_blocks=1 00:23:46.278 --rc geninfo_unexecuted_blocks=1 00:23:46.278 00:23:46.278 ' 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:46.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.278 --rc genhtml_branch_coverage=1 00:23:46.278 --rc genhtml_function_coverage=1 00:23:46.278 --rc genhtml_legend=1 00:23:46.278 --rc geninfo_all_blocks=1 00:23:46.278 --rc geninfo_unexecuted_blocks=1 00:23:46.278 00:23:46.278 ' 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:46.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.278 --rc genhtml_branch_coverage=1 00:23:46.278 --rc genhtml_function_coverage=1 00:23:46.278 --rc genhtml_legend=1 00:23:46.278 --rc geninfo_all_blocks=1 00:23:46.278 --rc geninfo_unexecuted_blocks=1 00:23:46.278 00:23:46.278 ' 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:46.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.278 --rc genhtml_branch_coverage=1 00:23:46.278 --rc genhtml_function_coverage=1 00:23:46.278 --rc genhtml_legend=1 00:23:46.278 --rc geninfo_all_blocks=1 00:23:46.278 --rc geninfo_unexecuted_blocks=1 00:23:46.278 00:23:46.278 ' 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:46.278 09:25:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.278 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:46.538 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:46.538 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:46.539 09:25:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:46.539 Cannot find device "nvmf_init_br" 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:46.539 Cannot find device "nvmf_init_br2" 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:46.539 Cannot find device "nvmf_tgt_br" 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:46.539 Cannot find device "nvmf_tgt_br2" 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:46.539 Cannot find device "nvmf_init_br" 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:46.539 Cannot find device "nvmf_init_br2" 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:46.539 Cannot find device "nvmf_tgt_br" 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:46.539 Cannot find device "nvmf_tgt_br2" 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:46.539 Cannot find device "nvmf_br" 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:46.539 Cannot find device "nvmf_init_if" 00:23:46.539 09:25:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:46.539 Cannot find device "nvmf_init_if2" 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:46.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:46.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:46.539 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:46.799 09:25:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:46.799 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:46.799 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:23:46.799 00:23:46.799 --- 10.0.0.3 ping statistics --- 00:23:46.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.799 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:46.799 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:46.799 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:23:46.799 00:23:46.799 --- 10.0.0.4 ping statistics --- 00:23:46.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.799 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:46.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:46.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:23:46.799 00:23:46.799 --- 10.0.0.1 ping statistics --- 00:23:46.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.799 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:46.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:46.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:23:46.799 00:23:46.799 --- 10.0.0.2 ping statistics --- 00:23:46.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.799 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=85224 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 85224 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 85224 ']' 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:46.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
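[Editor's note, not part of the captured output: a condensed recap of the nvmf_veth_init sequence traced above. The commands are taken from the trace itself; the grouping and comments are added here for readability, so treat this as a sketch rather than the authoritative helper body in test/nvmf/common.sh.]

  # Target-side interfaces live in a separate network namespace.
  ip netns add nvmf_tgt_ns_spdk
  # Two initiator and two target veth pairs; the *_br ends stay in the root namespace.
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Initiator addresses 10.0.0.1/.2, target addresses 10.0.0.3/.4.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # Bring everything up and bridge the root-namespace ends together.
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  # Allow NVMe/TCP traffic in, then verify reachability with the pings shown above.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT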
00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.799 09:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:47.058 [2024-12-13 09:25:40.729527] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:47.058 [2024-12-13 09:25:40.729924] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.058 [2024-12-13 09:25:40.918365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.317 [2024-12-13 09:25:41.042452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:47.317 [2024-12-13 09:25:41.042520] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:47.317 [2024-12-13 09:25:41.042544] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:47.317 [2024-12-13 09:25:41.042573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:47.317 [2024-12-13 09:25:41.042590] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:47.317 [2024-12-13 09:25:41.044031] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.575 [2024-12-13 09:25:41.245450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:47.834 09:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:47.834 09:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:47.834 09:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:47.834 09:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:47.834 09:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:48.093 09:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.093 09:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:48.093 09:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.093 09:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:48.093 [2024-12-13 09:25:41.763031] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.093 [2024-12-13 09:25:41.771250] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:48.093 null0 00:23:48.093 [2024-12-13 09:25:41.803161] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:48.093 09:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.093 09:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=85256 00:23:48.093 09:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:48.093 09:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 85256 /tmp/host.sock 00:23:48.094 09:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 85256 ']' 00:23:48.094 09:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:48.094 09:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:48.094 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:48.094 09:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:48.094 09:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:48.094 09:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:48.094 [2024-12-13 09:25:41.940869] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:48.094 [2024-12-13 09:25:41.941039] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85256 ] 00:23:48.353 [2024-12-13 09:25:42.120822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.612 [2024-12-13 09:25:42.245757] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.180 09:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:49.180 09:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:49.180 09:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:49.180 09:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:49.180 09:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.180 09:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:49.180 09:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.180 09:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:49.180 09:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.180 09:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:49.180 [2024-12-13 09:25:43.050017] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:49.439 09:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.439 09:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:49.439 09:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.439 09:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:50.375 [2024-12-13 09:25:44.157402] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:50.375 [2024-12-13 09:25:44.157458] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:50.375 [2024-12-13 09:25:44.157493] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:50.375 [2024-12-13 09:25:44.163469] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:23:50.375 [2024-12-13 09:25:44.226040] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:23:50.375 [2024-12-13 09:25:44.227473] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b500:1 started. 00:23:50.375 [2024-12-13 09:25:44.229437] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:50.375 [2024-12-13 09:25:44.229527] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:50.375 [2024-12-13 09:25:44.229588] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:50.375 [2024-12-13 09:25:44.229614] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:23:50.375 [2024-12-13 09:25:44.229647] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:23:50.375 09:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.375 09:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:50.375 09:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:50.375 09:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:50.375 09:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.375 09:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:50.375 09:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:50.375 09:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:50.375 09:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:50.375 [2024-12-13 09:25:44.236715] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b500 was disconnected and freed. delete nvme_qpair. 
00:23:50.375 09:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.634 09:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:50.634 09:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:23:50.634 09:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:23:50.634 09:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:50.634 09:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:50.634 09:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:50.634 09:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.634 09:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:50.634 09:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:50.634 09:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:50.634 09:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:50.634 09:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.634 09:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:50.634 09:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:51.571 09:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:51.571 09:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:51.571 09:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:51.571 09:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.571 09:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:51.571 09:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:51.571 09:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:51.571 09:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.571 09:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:51.571 09:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:52.948 09:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:52.948 09:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:52.948 09:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:52.948 
09:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.948 09:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:52.948 09:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:52.948 09:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:52.948 09:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.949 09:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:52.949 09:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:53.885 09:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:53.885 09:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:53.885 09:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.885 09:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:53.885 09:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:53.885 09:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:53.885 09:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:53.885 09:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.885 09:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:53.885 09:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:54.821 09:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:54.821 09:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.821 09:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:54.821 09:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.821 09:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:54.821 09:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:54.821 09:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:54.821 09:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.821 09:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:54.821 09:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:55.757 09:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:55.757 09:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:55.757 09:25:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:55.757 09:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.757 09:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:55.757 09:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:55.757 09:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:55.757 09:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.757 09:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:55.757 09:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:56.016 [2024-12-13 09:25:49.657588] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:56.016 [2024-12-13 09:25:49.657682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.016 [2024-12-13 09:25:49.657730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.016 [2024-12-13 09:25:49.657748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.016 [2024-12-13 09:25:49.657760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.016 [2024-12-13 09:25:49.657771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.016 [2024-12-13 09:25:49.657783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.016 [2024-12-13 09:25:49.657795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.016 [2024-12-13 09:25:49.657838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.016 [2024-12-13 09:25:49.657867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.016 [2024-12-13 09:25:49.657880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.016 [2024-12-13 09:25:49.657892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:23:56.016 [2024-12-13 09:25:49.667580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:23:56.016 [2024-12-13 09:25:49.677599] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:56.016 [2024-12-13 09:25:49.677650] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
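[Editor's note, not part of the captured output: the qpair teardown and reconnect attempts below follow from the timeouts passed to bdev_nvme_start_discovery earlier in the trace. As an illustration using the standard scripts/rpc.py wrapper (the flags are copied from the traced rpc_cmd invocation, the wrapper form is assumed), the equivalent manual call would look roughly like:]

  # The global -s selects the RPC socket; the -s after the subcommand is the discovery service port.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
      --wait-for-attach

[With --reconnect-delay-sec 1 the host retries the connection roughly every second, and with --ctrlr-loss-timeout-sec 2 it gives up and deletes the controller if reconnection has not succeeded within about two seconds, which is why the nvme0n1 bdev disappears shortly after nvmf_tgt_if is taken down.]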
00:23:56.016 [2024-12-13 09:25:49.677661] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:56.016 [2024-12-13 09:25:49.677670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:56.016 [2024-12-13 09:25:49.677764] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:56.988 09:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:56.988 09:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:56.988 09:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.988 09:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:56.988 09:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:56.988 09:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:56.988 09:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:56.988 [2024-12-13 09:25:50.715381] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:23:56.988 [2024-12-13 09:25:50.715517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:23:56.988 [2024-12-13 09:25:50.715555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:23:56.988 [2024-12-13 09:25:50.715628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:23:56.988 [2024-12-13 09:25:50.716771] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:23:56.988 [2024-12-13 09:25:50.717068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:56.988 [2024-12-13 09:25:50.717145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:56.988 [2024-12-13 09:25:50.717176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:56.988 [2024-12-13 09:25:50.717202] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:56.988 [2024-12-13 09:25:50.717221] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:56.988 [2024-12-13 09:25:50.717237] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:56.988 [2024-12-13 09:25:50.717263] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:23:56.988 [2024-12-13 09:25:50.717343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:56.988 09:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.989 09:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:56.989 09:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:57.924 [2024-12-13 09:25:51.717450] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:57.924 [2024-12-13 09:25:51.717510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:57.924 [2024-12-13 09:25:51.717538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:57.924 [2024-12-13 09:25:51.717567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:57.924 [2024-12-13 09:25:51.717579] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:23:57.924 [2024-12-13 09:25:51.717592] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:57.924 [2024-12-13 09:25:51.717601] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:57.924 [2024-12-13 09:25:51.717609] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:57.924 [2024-12-13 09:25:51.717663] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:23:57.924 [2024-12-13 09:25:51.717712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.924 [2024-12-13 09:25:51.717730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.924 [2024-12-13 09:25:51.717753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.924 [2024-12-13 09:25:51.717780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.924 [2024-12-13 09:25:51.717792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.924 [2024-12-13 09:25:51.717819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.924 [2024-12-13 09:25:51.717832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.924 [2024-12-13 09:25:51.717843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.924 [2024-12-13 09:25:51.717856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.924 [2024-12-13 09:25:51.717881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.924 [2024-12-13 09:25:51.717893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:23:57.924 [2024-12-13 09:25:51.718401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:23:57.924 [2024-12-13 09:25:51.719430] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:57.924 [2024-12-13 09:25:51.719478] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:23:57.924 09:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:57.924 09:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:57.924 09:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.924 09:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:57.924 09:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:57.924 09:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:57.924 09:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:57.924 09:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.924 09:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:57.924 09:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:57.924 09:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:58.183 09:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:58.183 09:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:58.183 09:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:58.183 09:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.183 09:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:58.183 09:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:58.183 09:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:58.183 09:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:58.183 09:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.183 09:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:58.183 09:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:59.118 09:25:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:59.118 09:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:59.118 09:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:59.118 09:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.118 09:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:59.118 09:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:59.118 09:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:59.118 09:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.118 09:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:59.118 09:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:00.055 [2024-12-13 09:25:53.732602] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:24:00.055 [2024-12-13 09:25:53.732636] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:24:00.055 [2024-12-13 09:25:53.732699] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:00.055 [2024-12-13 09:25:53.738665] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:24:00.055 [2024-12-13 09:25:53.793245] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:24:00.055 [2024-12-13 09:25:53.794455] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x61500002c180:1 started. 00:24:00.055 [2024-12-13 09:25:53.796424] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:00.055 [2024-12-13 09:25:53.796502] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:00.055 [2024-12-13 09:25:53.796554] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:00.055 [2024-12-13 09:25:53.796579] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:24:00.055 [2024-12-13 09:25:53.796593] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:24:00.055 [2024-12-13 09:25:53.801518] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x61500002c180 was disconnected and freed. delete nvme_qpair. 
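With the target address restored by the @82/@83 commands above, the discovery poller reconnects on its own, creates controller 2 against 10.0.0.3:4420 and exposes the namespace as nvme1n1. A minimal sketch of that recovery step as it appears in the trace; the namespace and interface names are the ones this run uses:

    # Re-add the target IP inside the target network namespace and bring the veth back up.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # wait_for_bdev nvme1n1   # poll until the rediscovered subsystem shows up as a bdev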
00:24:00.055 09:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:00.055 09:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:00.055 09:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:00.055 09:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.055 09:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:00.055 09:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:00.055 09:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:00.314 09:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.314 09:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:00.314 09:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:00.314 09:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 85256 00:24:00.314 09:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 85256 ']' 00:24:00.314 09:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 85256 00:24:00.314 09:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:00.314 09:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:00.314 09:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85256 00:24:00.314 killing process with pid 85256 00:24:00.314 09:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:00.314 09:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:00.314 09:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85256' 00:24:00.314 09:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 85256 00:24:00.314 09:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 85256 00:24:01.251 09:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:01.251 09:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:01.251 09:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:01.251 09:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:01.251 09:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:01.251 09:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:01.251 09:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:01.251 rmmod nvme_tcp 00:24:01.251 rmmod nvme_fabrics 00:24:01.251 rmmod nvme_keyring 00:24:01.251 09:25:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:01.251 09:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:01.251 09:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:01.251 09:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 85224 ']' 00:24:01.251 09:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 85224 00:24:01.251 09:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 85224 ']' 00:24:01.251 09:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 85224 00:24:01.251 09:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:01.251 09:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:01.251 09:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85224 00:24:01.251 killing process with pid 85224 00:24:01.251 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:01.251 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:01.251 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85224' 00:24:01.251 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 85224 00:24:01.251 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 85224 00:24:02.188 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:02.188 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:02.188 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:02.188 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:02.188 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:24:02.188 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:02.188 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:24:02.188 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:02.188 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:02.188 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:02.188 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:02.188 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:02.188 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:02.188 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:02.188 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:02.188 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:02.188 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:02.188 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:02.188 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:02.188 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:02.188 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:02.188 09:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:02.188 09:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:02.189 09:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.189 09:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:02.189 09:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.189 09:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:24:02.189 00:24:02.189 real 0m16.060s 00:24:02.189 user 0m27.170s 00:24:02.189 sys 0m2.581s 00:24:02.189 09:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:02.189 ************************************ 00:24:02.189 END TEST nvmf_discovery_remove_ifc 00:24:02.189 ************************************ 00:24:02.189 09:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.449 ************************************ 00:24:02.449 START TEST nvmf_identify_kernel_target 00:24:02.449 ************************************ 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:02.449 * Looking for test storage... 
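Between the two tests, nvmftestfini (traced just above) strips the SPDK iptables rules and tears down the veth topology. Condensed from the trace, with the interface names used by this job:

    # Drop only the rules tagged SPDK_NVMF, keep everything else intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Detach the bridge ports, delete the bridge and the veth pairs on both sides.
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    # remove_spdk_ns then deletes the nvmf_tgt_ns_spdk namespace itself.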
00:24:02.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:02.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.449 --rc genhtml_branch_coverage=1 00:24:02.449 --rc genhtml_function_coverage=1 00:24:02.449 --rc genhtml_legend=1 00:24:02.449 --rc geninfo_all_blocks=1 00:24:02.449 --rc geninfo_unexecuted_blocks=1 00:24:02.449 00:24:02.449 ' 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:02.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.449 --rc genhtml_branch_coverage=1 00:24:02.449 --rc genhtml_function_coverage=1 00:24:02.449 --rc genhtml_legend=1 00:24:02.449 --rc geninfo_all_blocks=1 00:24:02.449 --rc geninfo_unexecuted_blocks=1 00:24:02.449 00:24:02.449 ' 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:02.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.449 --rc genhtml_branch_coverage=1 00:24:02.449 --rc genhtml_function_coverage=1 00:24:02.449 --rc genhtml_legend=1 00:24:02.449 --rc geninfo_all_blocks=1 00:24:02.449 --rc geninfo_unexecuted_blocks=1 00:24:02.449 00:24:02.449 ' 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:02.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.449 --rc genhtml_branch_coverage=1 00:24:02.449 --rc genhtml_function_coverage=1 00:24:02.449 --rc genhtml_legend=1 00:24:02.449 --rc geninfo_all_blocks=1 00:24:02.449 --rc geninfo_unexecuted_blocks=1 00:24:02.449 00:24:02.449 ' 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
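The @1710/@1711 block above is scripts/common.sh checking whether the installed lcov predates 2.0 before picking the coverage flags. A sketch of the comparison as traced; the names are the ones shown in the log, the bodies are abbreviated, so treat the details as approximate:

    lt 1.15 2    # true when version 1.15 sorts before version 2
    # cmp_versions 1.15 '<' 2 splits both strings with IFS=.-: into ver1[]/ver2[]
    # and walks the fields numerically (decimal), deciding on the first field that differs.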
00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.449 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:02.450 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:02.450 09:25:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:02.450 09:25:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:02.450 Cannot find device "nvmf_init_br" 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:24:02.450 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:02.709 Cannot find device "nvmf_init_br2" 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:02.709 Cannot find device "nvmf_tgt_br" 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:02.709 Cannot find device "nvmf_tgt_br2" 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:02.709 Cannot find device "nvmf_init_br" 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:02.709 Cannot find device "nvmf_init_br2" 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:02.709 Cannot find device "nvmf_tgt_br" 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:02.709 Cannot find device "nvmf_tgt_br2" 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:02.709 Cannot find device "nvmf_br" 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:02.709 Cannot find device "nvmf_init_if" 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:02.709 Cannot find device "nvmf_init_if2" 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:02.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:02.709 09:25:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:02.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:02.709 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:02.710 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:02.710 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:02.710 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:02.710 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:02.710 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:02.710 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:02.710 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:02.710 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:02.710 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:02.710 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:02.969 09:25:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:02.969 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:02.969 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:24:02.969 00:24:02.969 --- 10.0.0.3 ping statistics --- 00:24:02.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.969 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:02.969 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:02.969 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:24:02.969 00:24:02.969 --- 10.0.0.4 ping statistics --- 00:24:02.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.969 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:02.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:02.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:24:02.969 00:24:02.969 --- 10.0.0.1 ping statistics --- 00:24:02.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.969 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:02.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:02.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:24:02.969 00:24:02.969 --- 10.0.0.2 ping statistics --- 00:24:02.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.969 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:02.969 09:25:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:03.228 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:03.228 Waiting for block devices as requested 00:24:03.487 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:03.487 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:03.487 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:03.487 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:03.487 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:03.487 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:03.487 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:03.487 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:03.487 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:03.487 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:03.487 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:03.487 No valid GPT data, bailing 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:24:03.747 09:25:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:03.747 No valid GPT data, bailing 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:03.747 No valid GPT data, bailing 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:03.747 No valid GPT data, bailing 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:24:03.747 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:04.006 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid=5267ba90-6d03-4c73-b69a-15b62f92a67a -a 10.0.0.1 -t tcp -s 4420 00:24:04.006 00:24:04.006 Discovery Log Number of Records 2, Generation counter 2 00:24:04.006 =====Discovery Log Entry 0====== 00:24:04.006 trtype: tcp 00:24:04.006 adrfam: ipv4 00:24:04.006 subtype: current discovery subsystem 00:24:04.006 treq: not specified, sq flow control disable supported 00:24:04.006 portid: 1 00:24:04.006 trsvcid: 4420 00:24:04.006 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:04.006 traddr: 10.0.0.1 00:24:04.006 eflags: none 00:24:04.006 sectype: none 00:24:04.006 =====Discovery Log Entry 1====== 00:24:04.006 trtype: tcp 00:24:04.006 adrfam: ipv4 00:24:04.006 subtype: nvme subsystem 00:24:04.006 treq: not 
specified, sq flow control disable supported 00:24:04.006 portid: 1 00:24:04.006 trsvcid: 4420 00:24:04.006 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:04.006 traddr: 10.0.0.1 00:24:04.006 eflags: none 00:24:04.006 sectype: none 00:24:04.007 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:04.007 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:04.266 ===================================================== 00:24:04.266 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:04.266 ===================================================== 00:24:04.266 Controller Capabilities/Features 00:24:04.266 ================================ 00:24:04.266 Vendor ID: 0000 00:24:04.266 Subsystem Vendor ID: 0000 00:24:04.266 Serial Number: e2881826164a2e5717ab 00:24:04.266 Model Number: Linux 00:24:04.266 Firmware Version: 6.8.9-20 00:24:04.266 Recommended Arb Burst: 0 00:24:04.266 IEEE OUI Identifier: 00 00 00 00:24:04.266 Multi-path I/O 00:24:04.266 May have multiple subsystem ports: No 00:24:04.266 May have multiple controllers: No 00:24:04.266 Associated with SR-IOV VF: No 00:24:04.266 Max Data Transfer Size: Unlimited 00:24:04.266 Max Number of Namespaces: 0 00:24:04.266 Max Number of I/O Queues: 1024 00:24:04.266 NVMe Specification Version (VS): 1.3 00:24:04.266 NVMe Specification Version (Identify): 1.3 00:24:04.266 Maximum Queue Entries: 1024 00:24:04.266 Contiguous Queues Required: No 00:24:04.266 Arbitration Mechanisms Supported 00:24:04.266 Weighted Round Robin: Not Supported 00:24:04.266 Vendor Specific: Not Supported 00:24:04.266 Reset Timeout: 7500 ms 00:24:04.266 Doorbell Stride: 4 bytes 00:24:04.266 NVM Subsystem Reset: Not Supported 00:24:04.266 Command Sets Supported 00:24:04.266 NVM Command Set: Supported 00:24:04.266 Boot Partition: Not Supported 00:24:04.266 Memory Page Size Minimum: 4096 bytes 00:24:04.266 Memory Page Size Maximum: 4096 bytes 00:24:04.266 Persistent Memory Region: Not Supported 00:24:04.266 Optional Asynchronous Events Supported 00:24:04.266 Namespace Attribute Notices: Not Supported 00:24:04.266 Firmware Activation Notices: Not Supported 00:24:04.266 ANA Change Notices: Not Supported 00:24:04.266 PLE Aggregate Log Change Notices: Not Supported 00:24:04.266 LBA Status Info Alert Notices: Not Supported 00:24:04.266 EGE Aggregate Log Change Notices: Not Supported 00:24:04.266 Normal NVM Subsystem Shutdown event: Not Supported 00:24:04.266 Zone Descriptor Change Notices: Not Supported 00:24:04.266 Discovery Log Change Notices: Supported 00:24:04.266 Controller Attributes 00:24:04.266 128-bit Host Identifier: Not Supported 00:24:04.266 Non-Operational Permissive Mode: Not Supported 00:24:04.266 NVM Sets: Not Supported 00:24:04.266 Read Recovery Levels: Not Supported 00:24:04.266 Endurance Groups: Not Supported 00:24:04.266 Predictable Latency Mode: Not Supported 00:24:04.266 Traffic Based Keep ALive: Not Supported 00:24:04.266 Namespace Granularity: Not Supported 00:24:04.266 SQ Associations: Not Supported 00:24:04.266 UUID List: Not Supported 00:24:04.266 Multi-Domain Subsystem: Not Supported 00:24:04.266 Fixed Capacity Management: Not Supported 00:24:04.266 Variable Capacity Management: Not Supported 00:24:04.266 Delete Endurance Group: Not Supported 00:24:04.266 Delete NVM Set: Not Supported 00:24:04.266 Extended LBA Formats Supported: Not Supported 00:24:04.266 Flexible Data 
Placement Supported: Not Supported 00:24:04.266 00:24:04.266 Controller Memory Buffer Support 00:24:04.266 ================================ 00:24:04.266 Supported: No 00:24:04.266 00:24:04.266 Persistent Memory Region Support 00:24:04.266 ================================ 00:24:04.266 Supported: No 00:24:04.266 00:24:04.266 Admin Command Set Attributes 00:24:04.266 ============================ 00:24:04.266 Security Send/Receive: Not Supported 00:24:04.266 Format NVM: Not Supported 00:24:04.266 Firmware Activate/Download: Not Supported 00:24:04.266 Namespace Management: Not Supported 00:24:04.266 Device Self-Test: Not Supported 00:24:04.266 Directives: Not Supported 00:24:04.266 NVMe-MI: Not Supported 00:24:04.266 Virtualization Management: Not Supported 00:24:04.266 Doorbell Buffer Config: Not Supported 00:24:04.266 Get LBA Status Capability: Not Supported 00:24:04.266 Command & Feature Lockdown Capability: Not Supported 00:24:04.266 Abort Command Limit: 1 00:24:04.266 Async Event Request Limit: 1 00:24:04.266 Number of Firmware Slots: N/A 00:24:04.266 Firmware Slot 1 Read-Only: N/A 00:24:04.266 Firmware Activation Without Reset: N/A 00:24:04.266 Multiple Update Detection Support: N/A 00:24:04.266 Firmware Update Granularity: No Information Provided 00:24:04.266 Per-Namespace SMART Log: No 00:24:04.266 Asymmetric Namespace Access Log Page: Not Supported 00:24:04.266 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:04.266 Command Effects Log Page: Not Supported 00:24:04.266 Get Log Page Extended Data: Supported 00:24:04.266 Telemetry Log Pages: Not Supported 00:24:04.266 Persistent Event Log Pages: Not Supported 00:24:04.266 Supported Log Pages Log Page: May Support 00:24:04.266 Commands Supported & Effects Log Page: Not Supported 00:24:04.266 Feature Identifiers & Effects Log Page:May Support 00:24:04.266 NVMe-MI Commands & Effects Log Page: May Support 00:24:04.266 Data Area 4 for Telemetry Log: Not Supported 00:24:04.266 Error Log Page Entries Supported: 1 00:24:04.266 Keep Alive: Not Supported 00:24:04.266 00:24:04.266 NVM Command Set Attributes 00:24:04.266 ========================== 00:24:04.266 Submission Queue Entry Size 00:24:04.266 Max: 1 00:24:04.266 Min: 1 00:24:04.266 Completion Queue Entry Size 00:24:04.266 Max: 1 00:24:04.266 Min: 1 00:24:04.266 Number of Namespaces: 0 00:24:04.266 Compare Command: Not Supported 00:24:04.266 Write Uncorrectable Command: Not Supported 00:24:04.266 Dataset Management Command: Not Supported 00:24:04.266 Write Zeroes Command: Not Supported 00:24:04.266 Set Features Save Field: Not Supported 00:24:04.266 Reservations: Not Supported 00:24:04.266 Timestamp: Not Supported 00:24:04.266 Copy: Not Supported 00:24:04.266 Volatile Write Cache: Not Present 00:24:04.266 Atomic Write Unit (Normal): 1 00:24:04.266 Atomic Write Unit (PFail): 1 00:24:04.266 Atomic Compare & Write Unit: 1 00:24:04.266 Fused Compare & Write: Not Supported 00:24:04.266 Scatter-Gather List 00:24:04.266 SGL Command Set: Supported 00:24:04.266 SGL Keyed: Not Supported 00:24:04.266 SGL Bit Bucket Descriptor: Not Supported 00:24:04.267 SGL Metadata Pointer: Not Supported 00:24:04.267 Oversized SGL: Not Supported 00:24:04.267 SGL Metadata Address: Not Supported 00:24:04.267 SGL Offset: Supported 00:24:04.267 Transport SGL Data Block: Not Supported 00:24:04.267 Replay Protected Memory Block: Not Supported 00:24:04.267 00:24:04.267 Firmware Slot Information 00:24:04.267 ========================= 00:24:04.267 Active slot: 0 00:24:04.267 00:24:04.267 00:24:04.267 Error Log 
00:24:04.267 ========= 00:24:04.267 00:24:04.267 Active Namespaces 00:24:04.267 ================= 00:24:04.267 Discovery Log Page 00:24:04.267 ================== 00:24:04.267 Generation Counter: 2 00:24:04.267 Number of Records: 2 00:24:04.267 Record Format: 0 00:24:04.267 00:24:04.267 Discovery Log Entry 0 00:24:04.267 ---------------------- 00:24:04.267 Transport Type: 3 (TCP) 00:24:04.267 Address Family: 1 (IPv4) 00:24:04.267 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:04.267 Entry Flags: 00:24:04.267 Duplicate Returned Information: 0 00:24:04.267 Explicit Persistent Connection Support for Discovery: 0 00:24:04.267 Transport Requirements: 00:24:04.267 Secure Channel: Not Specified 00:24:04.267 Port ID: 1 (0x0001) 00:24:04.267 Controller ID: 65535 (0xffff) 00:24:04.267 Admin Max SQ Size: 32 00:24:04.267 Transport Service Identifier: 4420 00:24:04.267 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:04.267 Transport Address: 10.0.0.1 00:24:04.267 Discovery Log Entry 1 00:24:04.267 ---------------------- 00:24:04.267 Transport Type: 3 (TCP) 00:24:04.267 Address Family: 1 (IPv4) 00:24:04.267 Subsystem Type: 2 (NVM Subsystem) 00:24:04.267 Entry Flags: 00:24:04.267 Duplicate Returned Information: 0 00:24:04.267 Explicit Persistent Connection Support for Discovery: 0 00:24:04.267 Transport Requirements: 00:24:04.267 Secure Channel: Not Specified 00:24:04.267 Port ID: 1 (0x0001) 00:24:04.267 Controller ID: 65535 (0xffff) 00:24:04.267 Admin Max SQ Size: 32 00:24:04.267 Transport Service Identifier: 4420 00:24:04.267 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:04.267 Transport Address: 10.0.0.1 00:24:04.267 09:25:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:04.526 get_feature(0x01) failed 00:24:04.526 get_feature(0x02) failed 00:24:04.526 get_feature(0x04) failed 00:24:04.526 ===================================================== 00:24:04.526 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:04.526 ===================================================== 00:24:04.526 Controller Capabilities/Features 00:24:04.526 ================================ 00:24:04.526 Vendor ID: 0000 00:24:04.526 Subsystem Vendor ID: 0000 00:24:04.526 Serial Number: 2095fe139450b303650a 00:24:04.526 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:04.526 Firmware Version: 6.8.9-20 00:24:04.526 Recommended Arb Burst: 6 00:24:04.526 IEEE OUI Identifier: 00 00 00 00:24:04.526 Multi-path I/O 00:24:04.526 May have multiple subsystem ports: Yes 00:24:04.526 May have multiple controllers: Yes 00:24:04.526 Associated with SR-IOV VF: No 00:24:04.526 Max Data Transfer Size: Unlimited 00:24:04.526 Max Number of Namespaces: 1024 00:24:04.526 Max Number of I/O Queues: 128 00:24:04.526 NVMe Specification Version (VS): 1.3 00:24:04.526 NVMe Specification Version (Identify): 1.3 00:24:04.526 Maximum Queue Entries: 1024 00:24:04.526 Contiguous Queues Required: No 00:24:04.526 Arbitration Mechanisms Supported 00:24:04.526 Weighted Round Robin: Not Supported 00:24:04.526 Vendor Specific: Not Supported 00:24:04.526 Reset Timeout: 7500 ms 00:24:04.526 Doorbell Stride: 4 bytes 00:24:04.526 NVM Subsystem Reset: Not Supported 00:24:04.526 Command Sets Supported 00:24:04.526 NVM Command Set: Supported 00:24:04.526 Boot Partition: Not Supported 00:24:04.526 Memory 
Page Size Minimum: 4096 bytes 00:24:04.526 Memory Page Size Maximum: 4096 bytes 00:24:04.526 Persistent Memory Region: Not Supported 00:24:04.526 Optional Asynchronous Events Supported 00:24:04.526 Namespace Attribute Notices: Supported 00:24:04.526 Firmware Activation Notices: Not Supported 00:24:04.526 ANA Change Notices: Supported 00:24:04.526 PLE Aggregate Log Change Notices: Not Supported 00:24:04.526 LBA Status Info Alert Notices: Not Supported 00:24:04.526 EGE Aggregate Log Change Notices: Not Supported 00:24:04.526 Normal NVM Subsystem Shutdown event: Not Supported 00:24:04.526 Zone Descriptor Change Notices: Not Supported 00:24:04.526 Discovery Log Change Notices: Not Supported 00:24:04.526 Controller Attributes 00:24:04.526 128-bit Host Identifier: Supported 00:24:04.526 Non-Operational Permissive Mode: Not Supported 00:24:04.526 NVM Sets: Not Supported 00:24:04.526 Read Recovery Levels: Not Supported 00:24:04.526 Endurance Groups: Not Supported 00:24:04.526 Predictable Latency Mode: Not Supported 00:24:04.526 Traffic Based Keep ALive: Supported 00:24:04.526 Namespace Granularity: Not Supported 00:24:04.526 SQ Associations: Not Supported 00:24:04.526 UUID List: Not Supported 00:24:04.526 Multi-Domain Subsystem: Not Supported 00:24:04.526 Fixed Capacity Management: Not Supported 00:24:04.526 Variable Capacity Management: Not Supported 00:24:04.526 Delete Endurance Group: Not Supported 00:24:04.526 Delete NVM Set: Not Supported 00:24:04.526 Extended LBA Formats Supported: Not Supported 00:24:04.526 Flexible Data Placement Supported: Not Supported 00:24:04.526 00:24:04.526 Controller Memory Buffer Support 00:24:04.526 ================================ 00:24:04.526 Supported: No 00:24:04.526 00:24:04.526 Persistent Memory Region Support 00:24:04.526 ================================ 00:24:04.526 Supported: No 00:24:04.526 00:24:04.526 Admin Command Set Attributes 00:24:04.526 ============================ 00:24:04.526 Security Send/Receive: Not Supported 00:24:04.526 Format NVM: Not Supported 00:24:04.526 Firmware Activate/Download: Not Supported 00:24:04.526 Namespace Management: Not Supported 00:24:04.526 Device Self-Test: Not Supported 00:24:04.526 Directives: Not Supported 00:24:04.526 NVMe-MI: Not Supported 00:24:04.526 Virtualization Management: Not Supported 00:24:04.526 Doorbell Buffer Config: Not Supported 00:24:04.526 Get LBA Status Capability: Not Supported 00:24:04.526 Command & Feature Lockdown Capability: Not Supported 00:24:04.526 Abort Command Limit: 4 00:24:04.526 Async Event Request Limit: 4 00:24:04.526 Number of Firmware Slots: N/A 00:24:04.527 Firmware Slot 1 Read-Only: N/A 00:24:04.527 Firmware Activation Without Reset: N/A 00:24:04.527 Multiple Update Detection Support: N/A 00:24:04.527 Firmware Update Granularity: No Information Provided 00:24:04.527 Per-Namespace SMART Log: Yes 00:24:04.527 Asymmetric Namespace Access Log Page: Supported 00:24:04.527 ANA Transition Time : 10 sec 00:24:04.527 00:24:04.527 Asymmetric Namespace Access Capabilities 00:24:04.527 ANA Optimized State : Supported 00:24:04.527 ANA Non-Optimized State : Supported 00:24:04.527 ANA Inaccessible State : Supported 00:24:04.527 ANA Persistent Loss State : Supported 00:24:04.527 ANA Change State : Supported 00:24:04.527 ANAGRPID is not changed : No 00:24:04.527 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:04.527 00:24:04.527 ANA Group Identifier Maximum : 128 00:24:04.527 Number of ANA Group Identifiers : 128 00:24:04.527 Max Number of Allowed Namespaces : 1024 00:24:04.527 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:24:04.527 Command Effects Log Page: Supported 00:24:04.527 Get Log Page Extended Data: Supported 00:24:04.527 Telemetry Log Pages: Not Supported 00:24:04.527 Persistent Event Log Pages: Not Supported 00:24:04.527 Supported Log Pages Log Page: May Support 00:24:04.527 Commands Supported & Effects Log Page: Not Supported 00:24:04.527 Feature Identifiers & Effects Log Page:May Support 00:24:04.527 NVMe-MI Commands & Effects Log Page: May Support 00:24:04.527 Data Area 4 for Telemetry Log: Not Supported 00:24:04.527 Error Log Page Entries Supported: 128 00:24:04.527 Keep Alive: Supported 00:24:04.527 Keep Alive Granularity: 1000 ms 00:24:04.527 00:24:04.527 NVM Command Set Attributes 00:24:04.527 ========================== 00:24:04.527 Submission Queue Entry Size 00:24:04.527 Max: 64 00:24:04.527 Min: 64 00:24:04.527 Completion Queue Entry Size 00:24:04.527 Max: 16 00:24:04.527 Min: 16 00:24:04.527 Number of Namespaces: 1024 00:24:04.527 Compare Command: Not Supported 00:24:04.527 Write Uncorrectable Command: Not Supported 00:24:04.527 Dataset Management Command: Supported 00:24:04.527 Write Zeroes Command: Supported 00:24:04.527 Set Features Save Field: Not Supported 00:24:04.527 Reservations: Not Supported 00:24:04.527 Timestamp: Not Supported 00:24:04.527 Copy: Not Supported 00:24:04.527 Volatile Write Cache: Present 00:24:04.527 Atomic Write Unit (Normal): 1 00:24:04.527 Atomic Write Unit (PFail): 1 00:24:04.527 Atomic Compare & Write Unit: 1 00:24:04.527 Fused Compare & Write: Not Supported 00:24:04.527 Scatter-Gather List 00:24:04.527 SGL Command Set: Supported 00:24:04.527 SGL Keyed: Not Supported 00:24:04.527 SGL Bit Bucket Descriptor: Not Supported 00:24:04.527 SGL Metadata Pointer: Not Supported 00:24:04.527 Oversized SGL: Not Supported 00:24:04.527 SGL Metadata Address: Not Supported 00:24:04.527 SGL Offset: Supported 00:24:04.527 Transport SGL Data Block: Not Supported 00:24:04.527 Replay Protected Memory Block: Not Supported 00:24:04.527 00:24:04.527 Firmware Slot Information 00:24:04.527 ========================= 00:24:04.527 Active slot: 0 00:24:04.527 00:24:04.527 Asymmetric Namespace Access 00:24:04.527 =========================== 00:24:04.527 Change Count : 0 00:24:04.527 Number of ANA Group Descriptors : 1 00:24:04.527 ANA Group Descriptor : 0 00:24:04.527 ANA Group ID : 1 00:24:04.527 Number of NSID Values : 1 00:24:04.527 Change Count : 0 00:24:04.527 ANA State : 1 00:24:04.527 Namespace Identifier : 1 00:24:04.527 00:24:04.527 Commands Supported and Effects 00:24:04.527 ============================== 00:24:04.527 Admin Commands 00:24:04.527 -------------- 00:24:04.527 Get Log Page (02h): Supported 00:24:04.527 Identify (06h): Supported 00:24:04.527 Abort (08h): Supported 00:24:04.527 Set Features (09h): Supported 00:24:04.527 Get Features (0Ah): Supported 00:24:04.527 Asynchronous Event Request (0Ch): Supported 00:24:04.527 Keep Alive (18h): Supported 00:24:04.527 I/O Commands 00:24:04.527 ------------ 00:24:04.527 Flush (00h): Supported 00:24:04.527 Write (01h): Supported LBA-Change 00:24:04.527 Read (02h): Supported 00:24:04.527 Write Zeroes (08h): Supported LBA-Change 00:24:04.527 Dataset Management (09h): Supported 00:24:04.527 00:24:04.527 Error Log 00:24:04.527 ========= 00:24:04.527 Entry: 0 00:24:04.527 Error Count: 0x3 00:24:04.527 Submission Queue Id: 0x0 00:24:04.527 Command Id: 0x5 00:24:04.527 Phase Bit: 0 00:24:04.527 Status Code: 0x2 00:24:04.527 Status Code Type: 0x0 00:24:04.527 Do Not Retry: 1 00:24:04.527 Error 
Location: 0x28 00:24:04.527 LBA: 0x0 00:24:04.527 Namespace: 0x0 00:24:04.527 Vendor Log Page: 0x0 00:24:04.527 ----------- 00:24:04.527 Entry: 1 00:24:04.527 Error Count: 0x2 00:24:04.527 Submission Queue Id: 0x0 00:24:04.527 Command Id: 0x5 00:24:04.527 Phase Bit: 0 00:24:04.527 Status Code: 0x2 00:24:04.527 Status Code Type: 0x0 00:24:04.527 Do Not Retry: 1 00:24:04.527 Error Location: 0x28 00:24:04.527 LBA: 0x0 00:24:04.527 Namespace: 0x0 00:24:04.527 Vendor Log Page: 0x0 00:24:04.527 ----------- 00:24:04.527 Entry: 2 00:24:04.527 Error Count: 0x1 00:24:04.527 Submission Queue Id: 0x0 00:24:04.527 Command Id: 0x4 00:24:04.527 Phase Bit: 0 00:24:04.527 Status Code: 0x2 00:24:04.527 Status Code Type: 0x0 00:24:04.527 Do Not Retry: 1 00:24:04.527 Error Location: 0x28 00:24:04.527 LBA: 0x0 00:24:04.527 Namespace: 0x0 00:24:04.527 Vendor Log Page: 0x0 00:24:04.527 00:24:04.527 Number of Queues 00:24:04.527 ================ 00:24:04.527 Number of I/O Submission Queues: 128 00:24:04.527 Number of I/O Completion Queues: 128 00:24:04.527 00:24:04.527 ZNS Specific Controller Data 00:24:04.527 ============================ 00:24:04.527 Zone Append Size Limit: 0 00:24:04.527 00:24:04.527 00:24:04.527 Active Namespaces 00:24:04.527 ================= 00:24:04.527 get_feature(0x05) failed 00:24:04.527 Namespace ID:1 00:24:04.527 Command Set Identifier: NVM (00h) 00:24:04.527 Deallocate: Supported 00:24:04.527 Deallocated/Unwritten Error: Not Supported 00:24:04.527 Deallocated Read Value: Unknown 00:24:04.527 Deallocate in Write Zeroes: Not Supported 00:24:04.527 Deallocated Guard Field: 0xFFFF 00:24:04.527 Flush: Supported 00:24:04.527 Reservation: Not Supported 00:24:04.527 Namespace Sharing Capabilities: Multiple Controllers 00:24:04.527 Size (in LBAs): 1310720 (5GiB) 00:24:04.527 Capacity (in LBAs): 1310720 (5GiB) 00:24:04.527 Utilization (in LBAs): 1310720 (5GiB) 00:24:04.527 UUID: 7d6dd8fc-309a-49c0-a5b8-2933d33487d9 00:24:04.527 Thin Provisioning: Not Supported 00:24:04.527 Per-NS Atomic Units: Yes 00:24:04.527 Atomic Boundary Size (Normal): 0 00:24:04.527 Atomic Boundary Size (PFail): 0 00:24:04.527 Atomic Boundary Offset: 0 00:24:04.527 NGUID/EUI64 Never Reused: No 00:24:04.527 ANA group ID: 1 00:24:04.527 Namespace Write Protected: No 00:24:04.527 Number of LBA Formats: 1 00:24:04.527 Current LBA Format: LBA Format #00 00:24:04.527 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:24:04.527 00:24:04.527 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:04.527 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:04.527 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:24:04.527 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:04.527 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:24:04.527 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:04.527 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:04.527 rmmod nvme_tcp 00:24:04.527 rmmod nvme_fabrics 00:24:04.527 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:04.527 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:24:04.527 09:25:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:24:04.527 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:24:04.527 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:04.527 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:04.527 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:04.527 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:24:04.527 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:24:04.527 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:04.527 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:24:04.528 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:04.528 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:04.528 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:04.528 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:04.528 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:04.528 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:04.528 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:04.787 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:04.787 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:04.787 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:04.787 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:04.787 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:04.787 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:04.787 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:04.787 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:04.787 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:04.787 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.787 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.787 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.787 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:24:04.787 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:04.787 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:04.787 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:24:04.787 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:04.787 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:04.787 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:04.787 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:04.787 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:04.787 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:04.787 09:25:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:05.726 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:05.726 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:05.726 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:05.726 00:24:05.726 real 0m3.428s 00:24:05.726 user 0m1.221s 00:24:05.726 sys 0m1.586s 00:24:05.726 09:25:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:05.726 09:25:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.726 ************************************ 00:24:05.726 END TEST nvmf_identify_kernel_target 00:24:05.726 ************************************ 00:24:05.726 09:25:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:05.726 09:25:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:05.726 09:25:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:05.726 09:25:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.726 ************************************ 00:24:05.726 START TEST nvmf_auth_host 00:24:05.726 ************************************ 00:24:05.726 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:05.986 * Looking for test storage... 
00:24:05.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:05.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.986 --rc genhtml_branch_coverage=1 00:24:05.986 --rc genhtml_function_coverage=1 00:24:05.986 --rc genhtml_legend=1 00:24:05.986 --rc geninfo_all_blocks=1 00:24:05.986 --rc geninfo_unexecuted_blocks=1 00:24:05.986 00:24:05.986 ' 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:05.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.986 --rc genhtml_branch_coverage=1 00:24:05.986 --rc genhtml_function_coverage=1 00:24:05.986 --rc genhtml_legend=1 00:24:05.986 --rc geninfo_all_blocks=1 00:24:05.986 --rc geninfo_unexecuted_blocks=1 00:24:05.986 00:24:05.986 ' 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:05.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.986 --rc genhtml_branch_coverage=1 00:24:05.986 --rc genhtml_function_coverage=1 00:24:05.986 --rc genhtml_legend=1 00:24:05.986 --rc geninfo_all_blocks=1 00:24:05.986 --rc geninfo_unexecuted_blocks=1 00:24:05.986 00:24:05.986 ' 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:05.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.986 --rc genhtml_branch_coverage=1 00:24:05.986 --rc genhtml_function_coverage=1 00:24:05.986 --rc genhtml_legend=1 00:24:05.986 --rc geninfo_all_blocks=1 00:24:05.986 --rc geninfo_unexecuted_blocks=1 00:24:05.986 00:24:05.986 ' 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.986 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:05.987 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:05.987 Cannot find device "nvmf_init_br" 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:05.987 Cannot find device "nvmf_init_br2" 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:05.987 Cannot find device "nvmf_tgt_br" 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:05.987 Cannot find device "nvmf_tgt_br2" 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:05.987 Cannot find device "nvmf_init_br" 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:24:05.987 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:06.245 Cannot find device "nvmf_init_br2" 00:24:06.245 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:24:06.245 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:06.245 Cannot find device "nvmf_tgt_br" 00:24:06.245 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:24:06.245 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:06.245 Cannot find device "nvmf_tgt_br2" 00:24:06.245 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:24:06.245 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:06.245 Cannot find device "nvmf_br" 00:24:06.245 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:24:06.245 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:06.245 Cannot find device "nvmf_init_if" 00:24:06.245 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:24:06.245 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:06.245 Cannot find device "nvmf_init_if2" 00:24:06.245 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:24:06.245 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:06.245 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:06.245 09:25:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:24:06.245 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:06.245 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:06.245 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:24:06.245 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:06.245 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:06.245 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:06.245 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:06.245 09:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:06.245 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:06.245 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:06.245 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:06.245 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:06.245 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:06.245 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:06.245 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:06.245 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:06.245 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:06.245 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:06.245 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:06.245 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:06.245 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:06.245 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:06.245 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:06.245 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:06.245 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:06.245 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
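A condensed sketch of the veth/bridge topology that nvmf_veth_init is assembling in the trace above, using the interface names and 10.0.0.0/24 addresses seen there (only one initiator/target pair is shown; the helper creates two of each, and the remaining bridge enslavement, iptables ACCEPT rules and ping checks continue below):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + its bridge-side end
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end + its bridge-side end
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                     # both bridge-side ends join nvmf_br
ip link set nvmf_tgt_br master nvmf_br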
00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:06.504 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:06.504 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:24:06.504 00:24:06.504 --- 10.0.0.3 ping statistics --- 00:24:06.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.504 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:06.504 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:06.504 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:24:06.504 00:24:06.504 --- 10.0.0.4 ping statistics --- 00:24:06.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.504 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:06.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:06.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:24:06.504 00:24:06.504 --- 10.0.0.1 ping statistics --- 00:24:06.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.504 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:06.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:06.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:24:06.504 00:24:06.504 --- 10.0.0.2 ping statistics --- 00:24:06.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.504 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=86270 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 86270 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 86270 ']' 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
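A minimal sketch of the nvmfappstart/waitforlisten pattern traced here (not the SPDK helpers themselves, which live in nvmf/common.sh and autotest_common.sh; it assumes scripts/rpc.py with rpc_get_methods as the readiness probe): start nvmf_tgt inside the target namespace, then poll its RPC UNIX-domain socket until it answers before any test traffic is sent.

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
for _ in $(seq 1 100); do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1          # target died during startup
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
        >/dev/null 2>&1 && break                      # RPC socket is answering, target is ready
    sleep 0.1
done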
00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:06.504 09:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.441 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:07.441 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:07.441 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:07.441 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:07.441 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=346c67d653855624504c52d03139dfa3 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.wcv 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 346c67d653855624504c52d03139dfa3 0 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 346c67d653855624504c52d03139dfa3 0 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=346c67d653855624504c52d03139dfa3 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.wcv 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.wcv 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.wcv 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:07.701 09:26:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f3e83307025e5b571316e687d43c938e129d2981245030147298117ab2e1518b 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.hnC 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f3e83307025e5b571316e687d43c938e129d2981245030147298117ab2e1518b 3 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f3e83307025e5b571316e687d43c938e129d2981245030147298117ab2e1518b 3 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f3e83307025e5b571316e687d43c938e129d2981245030147298117ab2e1518b 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.hnC 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.hnC 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.hnC 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=52d9d0d4333cfb2887cf372750b31eaa22406d7255eea2c0 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.VoT 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 52d9d0d4333cfb2887cf372750b31eaa22406d7255eea2c0 0 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 52d9d0d4333cfb2887cf372750b31eaa22406d7255eea2c0 0 
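Each gen_dhchap_key call in the trace follows the same recipe: read len/2 random bytes from /dev/urandom as a hex string, wrap that string in the DHHC-1 secret representation for the selected digest index (null=0, sha256=1, sha384=2, sha512=3), store it in a mktemp file, and restrict the file to mode 0600. The sketch below reproduces that recipe under two assumptions the trace does not spell out: the CRC-32 of the secret is appended (little-endian) before base64 encoding, and the digest index is printed as a two-digit field; gen_key_sketch is an illustrative name, not the helper from nvmf/common.sh:

gen_key_sketch() {   # usage: gen_key_sketch <digest-index> <hex-length>
    local digest=$1 len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # e.g. 32 hex chars for len=32
    file=$(mktemp -t spdk.key-sketch.XXX)
    python3 - "$key" "$digest" > "$file" << 'PYEOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed byte order for the CRC tail
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PYEOF
    chmod 0600 "$file"
    echo "$file"
}

# Example: gen_key_sketch 1 32 prints the path of a file holding a DHHC-1:01:...: secret,
# which can then be registered with rpc.py keyring_file_add_key, as the trace does further down.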
00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=52d9d0d4333cfb2887cf372750b31eaa22406d7255eea2c0 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.VoT 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.VoT 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.VoT 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:07.701 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7a8cda538aebc16f863861a88403d25dd00ad2863a3ca7a3 00:24:07.702 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:07.702 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.4Ui 00:24:07.702 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7a8cda538aebc16f863861a88403d25dd00ad2863a3ca7a3 2 00:24:07.702 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7a8cda538aebc16f863861a88403d25dd00ad2863a3ca7a3 2 00:24:07.702 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:07.702 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:07.702 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7a8cda538aebc16f863861a88403d25dd00ad2863a3ca7a3 00:24:07.702 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:07.702 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:07.960 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.4Ui 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.4Ui 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.4Ui 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.961 09:26:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=13a5498aed3f89243bc642c525f0d8fb 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.YnV 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 13a5498aed3f89243bc642c525f0d8fb 1 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 13a5498aed3f89243bc642c525f0d8fb 1 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=13a5498aed3f89243bc642c525f0d8fb 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.YnV 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.YnV 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.YnV 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=54771d47cd76307934b8ec7f14d30f4d 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Otn 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 54771d47cd76307934b8ec7f14d30f4d 1 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 54771d47cd76307934b8ec7f14d30f4d 1 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=54771d47cd76307934b8ec7f14d30f4d 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Otn 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Otn 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Otn 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a4532ecd23e0a7c27d247377d38c0d59c8e8a48311631e45 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.FfL 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a4532ecd23e0a7c27d247377d38c0d59c8e8a48311631e45 2 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a4532ecd23e0a7c27d247377d38c0d59c8e8a48311631e45 2 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a4532ecd23e0a7c27d247377d38c0d59c8e8a48311631e45 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.FfL 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.FfL 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.FfL 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:07.961 09:26:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9393dfcfd6380bc7dc73ba5c94c2de62 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.cId 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9393dfcfd6380bc7dc73ba5c94c2de62 0 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9393dfcfd6380bc7dc73ba5c94c2de62 0 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9393dfcfd6380bc7dc73ba5c94c2de62 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.cId 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.cId 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.cId 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6b0d033f2092f04730a72f8bf4db78f11a0cc2c1db1b395cdbcc8cb4a06093ae 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:07.961 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.x8r 00:24:08.220 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6b0d033f2092f04730a72f8bf4db78f11a0cc2c1db1b395cdbcc8cb4a06093ae 3 00:24:08.220 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6b0d033f2092f04730a72f8bf4db78f11a0cc2c1db1b395cdbcc8cb4a06093ae 3 00:24:08.220 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:08.220 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:08.220 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6b0d033f2092f04730a72f8bf4db78f11a0cc2c1db1b395cdbcc8cb4a06093ae 00:24:08.220 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:08.220 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:24:08.220 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.x8r 00:24:08.220 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.x8r 00:24:08.220 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.x8r 00:24:08.220 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:08.220 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 86270 00:24:08.220 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 86270 ']' 00:24:08.220 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.220 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:08.220 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.220 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:08.220 09:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wcv 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.hnC ]] 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hnC 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.VoT 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.4Ui ]] 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.4Ui 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.YnV 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Otn ]] 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Otn 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.FfL 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.cId ]] 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.cId 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.x8r 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:08.480 09:26:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:08.480 09:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:08.739 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:08.998 Waiting for block devices as requested 00:24:08.998 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:08.998 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:09.566 No valid GPT data, bailing 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:09.566 No valid GPT data, bailing 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:09.566 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:24:09.567 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:24:09.567 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:09.567 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:09.567 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:24:09.567 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:24:09.567 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:09.825 No valid GPT data, bailing 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:09.826 No valid GPT data, bailing 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid=5267ba90-6d03-4c73-b69a-15b62f92a67a -a 10.0.0.1 -t tcp -s 4420 00:24:09.826 00:24:09.826 Discovery Log Number of Records 2, Generation counter 2 00:24:09.826 =====Discovery Log Entry 0====== 00:24:09.826 trtype: tcp 00:24:09.826 adrfam: ipv4 00:24:09.826 subtype: current discovery subsystem 00:24:09.826 treq: not specified, sq flow control disable supported 00:24:09.826 portid: 1 00:24:09.826 trsvcid: 4420 00:24:09.826 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:09.826 traddr: 10.0.0.1 00:24:09.826 eflags: none 00:24:09.826 sectype: none 00:24:09.826 =====Discovery Log Entry 1====== 00:24:09.826 trtype: tcp 00:24:09.826 adrfam: ipv4 00:24:09.826 subtype: nvme subsystem 00:24:09.826 treq: not specified, sq flow control disable supported 00:24:09.826 portid: 1 00:24:09.826 trsvcid: 4420 00:24:09.826 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:09.826 traddr: 10.0.0.1 00:24:09.826 eflags: none 00:24:09.826 sectype: none 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:09.826 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:10.085 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:10.085 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: ]] 00:24:10.085 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:10.085 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:10.085 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:10.085 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:10.085 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:10.085 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:10.085 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.085 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:10.085 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:10.085 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:10.085 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.085 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.086 nvme0n1 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: ]] 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.086 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.346 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.346 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.346 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:10.346 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:10.346 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:10.346 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.346 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.346 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:10.346 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.346 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:10.346 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:10.346 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:10.346 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:10.346 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.346 09:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.346 nvme0n1 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.346 
09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: ]] 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:10.346 09:26:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.346 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.606 nvme0n1 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:10.606 09:26:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: ]] 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.606 nvme0n1 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.606 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: ]] 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.866 09:26:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.866 nvme0n1 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:10.866 
09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:10.866 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.867 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:10.867 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:10.867 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:10.867 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:10.867 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.867 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
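For reference, each pass of the trace above repeats the same short RPC sequence for one digest/dhgroup/keyid combination. Below is a minimal sketch of that sequence for the sha256/ffdhe2048/keyid=2 case, assuming SPDK's scripts/rpc.py is available and the target subsystem at 10.0.0.1:4420 already holds the matching DH-HMAC-CHAP key; the NQNs, address, and flag names are taken verbatim from the trace, while the standalone rpc.py invocation stands in for the suite's rpc_cmd wrapper and is an assumption, not the test's exact call path:

  # restrict the initiator to the digest/dhgroup pair under test
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # connect with the host key; the controller key is passed only when a ckey exists for this keyid
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # authentication succeeded if the controller shows up, then detach before the next iteration
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0

The trace that follows simply re-runs this loop for the remaining key IDs and then for the ffdhe3072 and ffdhe4096 DH groups.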
00:24:11.125 nvme0n1 00:24:11.126 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.126 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.126 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.126 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.126 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.126 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.126 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.126 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.126 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.126 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.126 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.126 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:11.126 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.126 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:11.126 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.126 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:11.126 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:11.126 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:11.126 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:11.126 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:11.126 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:11.126 09:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:11.386 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:11.386 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: ]] 00:24:11.386 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:11.386 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:11.386 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.386 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:11.386 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:11.386 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:11.386 09:26:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.386 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:11.386 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.386 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.386 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.386 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.386 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:11.386 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:11.386 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:11.387 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.387 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.387 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:11.387 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.387 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:11.387 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:11.387 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:11.387 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.387 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.387 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.646 nvme0n1 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.646 09:26:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: ]] 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.646 09:26:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.646 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.906 nvme0n1 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: ]] 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.906 nvme0n1 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.906 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: ]] 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.166 nvme0n1 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.166 09:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.166 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.166 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.167 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.167 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.167 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.167 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.167 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:12.167 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.167 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:12.167 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:12.167 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:12.167 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:12.167 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:12.167 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:12.167 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:12.167 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:12.167 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:12.167 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:12.167 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.167 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:12.167 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:12.167 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:12.167 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.167 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:12.167 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.167 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.426 nvme0n1 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:12.426 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:12.994 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:12.995 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: ]] 00:24:12.995 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:12.995 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:12.995 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.995 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:12.995 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:12.995 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:12.995 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.995 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:12.995 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.995 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.995 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.995 09:26:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.995 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.995 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.995 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:12.995 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.995 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.995 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.995 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.995 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.995 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.995 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.995 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:12.995 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.995 09:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.254 nvme0n1 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: ]] 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.254 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.255 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.255 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.255 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.255 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.255 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.255 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.255 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.255 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.255 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.255 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.255 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.255 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.255 09:26:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.255 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.514 nvme0n1 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: ]] 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.514 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.774 nvme0n1 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: ]] 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.774 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.033 nvme0n1 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:14.033 09:26:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:14.033 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:14.034 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:14.034 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.034 09:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.292 nvme0n1 00:24:14.292 09:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.292 09:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.292 09:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.292 09:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.292 09:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.292 09:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.292 09:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.292 09:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.292 09:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.292 09:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.292 09:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.292 09:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:14.292 09:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.292 09:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:14.292 09:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.292 09:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:14.292 09:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:14.292 09:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:14.292 09:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:14.293 09:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:14.293 09:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:14.293 09:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:16.236 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:16.236 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: ]] 00:24:16.236 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:16.236 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:16.236 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.236 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:16.236 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:16.236 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:16.236 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.236 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:16.236 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.237 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.237 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.237 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.237 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:16.237 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:16.237 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:16.237 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.237 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.237 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:16.237 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.237 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:16.237 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:16.237 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:16.237 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:16.237 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.237 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.237 nvme0n1 00:24:16.237 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.237 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.237 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.237 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.237 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.237 09:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: ]] 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.237 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.502 nvme0n1 00:24:16.502 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.502 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.502 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.503 09:26:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.503 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.761 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.761 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.761 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.761 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.761 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.761 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.761 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.761 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:16.761 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.761 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:16.761 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:16.761 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:16.761 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:16.761 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:16.761 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:16.761 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:16.761 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:16.761 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: ]] 00:24:16.761 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:16.761 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:16.761 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.762 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:16.762 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:16.762 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:16.762 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.762 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:16.762 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.762 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.762 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.762 09:26:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.762 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:16.762 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:16.762 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:16.762 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.762 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.762 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:16.762 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.762 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:16.762 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:16.762 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:16.762 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:16.762 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.762 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.020 nvme0n1 00:24:17.020 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.020 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.020 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: ]] 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:17.021 09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.021 
09:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.280 nvme0n1 00:24:17.280 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.280 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.280 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.280 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.280 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.539 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.798 nvme0n1 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.798 09:26:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: ]] 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.798 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.799 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.799 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.799 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.799 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.799 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.799 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.799 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.799 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.799 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.799 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.799 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.799 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:17.799 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.799 09:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.367 nvme0n1 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: ]] 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.367 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.935 nvme0n1 00:24:18.935 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.935 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.935 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.935 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.935 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.935 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.935 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.935 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.935 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:18.935 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: ]] 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.194 
09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.194 09:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.762 nvme0n1 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: ]] 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.762 09:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.330 nvme0n1 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.330 09:26:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.330 09:26:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.330 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.898 nvme0n1 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: ]] 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.898 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.899 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.899 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.899 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.899 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.899 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.899 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.899 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.899 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.899 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.899 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.899 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.899 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:20.899 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.899 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:21.158 nvme0n1 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: ]] 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.158 nvme0n1 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.158 09:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.158 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.158 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.158 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.158 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:21.417 
09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: ]] 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.417 nvme0n1 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: ]] 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.417 
09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.417 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.676 nvme0n1 00:24:21.676 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.676 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.676 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.676 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.676 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.676 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.676 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.676 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.676 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.676 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:21.676 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.676 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.676 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:21.676 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.676 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:21.676 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:21.676 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:21.676 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:21.676 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:21.676 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:21.676 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:21.676 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:21.676 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:21.676 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:21.676 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.676 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.677 nvme0n1 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.677 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: ]] 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.936 nvme0n1 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.936 
09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: ]] 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.936 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.196 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.196 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.196 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.196 09:26:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.196 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.196 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.196 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.196 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.196 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.196 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.196 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.196 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.196 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:22.196 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.196 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.196 nvme0n1 00:24:22.196 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.196 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.196 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.196 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.196 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.196 09:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:22.196 09:26:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: ]] 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.196 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.455 nvme0n1 00:24:22.455 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.455 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.455 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.455 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.455 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.455 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.455 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.455 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.455 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: ]] 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.456 09:26:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.456 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.714 nvme0n1 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:22.714 
09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:22.714 nvme0n1 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.714 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: ]] 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:22.973 09:26:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.973 nvme0n1 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.973 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.232 09:26:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: ]] 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.232 09:26:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.232 09:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.232 nvme0n1 00:24:23.232 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.232 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.232 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.232 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.232 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.232 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: ]] 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.491 nvme0n1 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.491 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.750 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.750 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:23.750 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: ]] 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.751 nvme0n1 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.751 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.010 nvme0n1 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.010 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: ]] 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.270 09:26:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.270 09:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.529 nvme0n1 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: ]] 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:24.529 09:26:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.529 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.097 nvme0n1 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: ]] 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.097 09:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.356 nvme0n1 00:24:25.356 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.356 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.356 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.356 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.356 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.356 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: ]] 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.357 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.925 nvme0n1 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:25.925 09:26:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.925 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.185 nvme0n1 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: ]] 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.185 09:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.753 nvme0n1 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: ]] 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.753 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.754 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.754 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.754 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:26.754 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.754 09:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.321 nvme0n1 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.321 09:26:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: ]] 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:27.321 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:27.322 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:27.322 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.322 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:27.322 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.322 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.322 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.322 09:26:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.322 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.322 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.322 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.322 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.322 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.322 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:27.322 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.322 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:27.322 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:27.322 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:27.322 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:27.322 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.322 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.889 nvme0n1 00:24:27.889 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.889 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.889 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.889 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.889 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.889 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.889 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.889 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.889 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.889 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.889 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.889 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.889 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:27.889 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.889 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:27.889 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:27.889 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:27.889 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:27.889 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:27.889 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:27.889 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:27.889 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:27.889 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: ]] 00:24:27.889 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:27.889 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:27.890 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.890 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:27.890 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:27.890 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:27.890 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.890 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:27.890 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.890 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.890 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.890 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.890 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.890 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.890 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.890 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.890 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.890 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:27.890 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.890 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:27.890 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:27.890 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:27.890 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:27.890 09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.890 
09:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.457 nvme0n1 00:24:28.457 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.457 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.457 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.457 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.457 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.457 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.457 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.457 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.457 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.457 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.716 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.975 nvme0n1 00:24:28.975 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.975 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.975 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.975 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.975 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:29.235 09:26:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: ]] 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.235 09:26:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.235 09:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.235 nvme0n1 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: ]] 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:29.235 09:26:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.235 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.494 nvme0n1 00:24:29.494 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.494 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.494 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: ]] 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.495 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.754 nvme0n1 00:24:29.754 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.754 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.754 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.754 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.754 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.754 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.754 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.754 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: ]] 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.755 nvme0n1 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.755 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.014 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.014 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.014 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.014 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.014 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.014 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.014 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.014 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.014 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.014 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.014 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.014 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.014 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:30.014 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.014 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.014 nvme0n1 00:24:30.014 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.014 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.014 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.014 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: ]] 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.015 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:30.274 nvme0n1 00:24:30.274 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.274 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.274 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.274 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.274 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.274 09:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: ]] 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.274 nvme0n1 00:24:30.274 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:30.534 
09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: ]] 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.534 nvme0n1 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.534 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: ]] 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.794 
09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.794 nvme0n1 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.794 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.054 nvme0n1 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: ]] 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.054 09:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.313 nvme0n1 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.313 
09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: ]] 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.313 09:26:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.313 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.571 nvme0n1 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:31.572 09:26:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: ]] 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.572 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.830 nvme0n1 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: ]] 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.830 09:26:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.830 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.089 nvme0n1 00:24:32.089 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.089 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.089 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.089 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.089 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.089 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.089 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.089 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.089 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.089 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.089 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.089 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.089 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:32.090 
09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.090 09:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
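Each iteration above performs one DH-HMAC-CHAP connect per key: the target-side helper nvmet_auth_set_key publishes the digest, DH group, and DHHC-1 secret (the echoed 'hmac(sha512)', ffdhe4096, and DHHC-1:... values, presumably written into the kernel nvmet host entry's dhchap attributes), and the host side then reconfigures the bdev_nvme driver and attaches with the matching key before verifying and detaching. A minimal sketch of the same host-side sequence issued directly through SPDK's RPC script, assuming rpc_cmd is the usual wrapper around scripts/rpc.py and that the key0/ckey0 key names used by the next (ffdhe6144) round have already been registered:

    # Restrict DH-HMAC-CHAP negotiation to the digest/dhgroup under test
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # Attach to the target at 10.0.0.1:4420 with key0 (ckey0 enables bidirectional auth)
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Confirm the authenticated controller exists, then tear it down for the next iteration
    scripts/rpc.py bdev_nvme_get_controllers
    scripts/rpc.py bdev_nvme_detach_controller nvme0

The flags mirror the rpc_cmd invocations visible in the log; only the rpc.py entry point and the prior key registration step are assumptions here.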
00:24:32.349 nvme0n1 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: ]] 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:32.349 09:26:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.349 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.917 nvme0n1 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.917 09:26:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: ]] 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:32.917 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:32.918 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.918 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:32.918 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.918 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.918 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.918 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.918 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.918 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.918 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.918 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.918 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.918 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.918 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.918 09:26:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.918 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.918 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.918 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:32.918 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.918 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.177 nvme0n1 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: ]] 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.177 09:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.436 nvme0n1 00:24:33.436 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.436 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.436 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.436 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.436 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.436 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.695 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.695 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:33.695 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.695 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.695 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.695 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.695 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:33.695 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.695 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:33.695 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:33.695 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:33.695 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:33.695 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:33.695 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:33.695 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:33.695 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:33.695 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: ]] 00:24:33.695 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:33.695 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:33.695 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.695 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:33.696 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:33.696 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:33.696 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.696 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:33.696 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.696 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.696 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.696 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.696 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.696 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.696 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.696 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.696 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.696 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.696 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.696 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.696 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.696 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.696 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:33.696 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.696 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.955 nvme0n1 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.955 09:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.217 nvme0n1 00:24:34.217 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
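Each pass of the loop traced above follows the same shape: program the expected key on the target side, restrict the host to a single digest/DH group, attach with the matching key pair, confirm the controller appears, then detach. A condensed sketch of one iteration, using only helpers and flags visible in this trace (nvmet_auth_set_key and rpc_cmd are defined by the SPDK test scripts and are not reproduced here; key2/ckey2 name keys the script set up earlier in the run):

    digest=sha512 dhgroup=ffdhe6144 keyid=2
    # Target side: tell the nvmet subsystem which key/digest/DH group to expect (test helper).
    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
    # Host side: negotiate only this digest and DH group.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Attach with the matching host and controller keys.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # The connect succeeded if the controller shows up; then clean up for the next pass.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
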
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ2YzY3ZDY1Mzg1NTYyNDUwNGM1MmQwMzEzOWRmYTPv280O: 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: ]] 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjNlODMzMDcwMjVlNWI1NzEzMTZlNjg3ZDQzYzkzOGUxMjlkMjk4MTI0NTAzMDE0NzI5ODExN2FiMmUxNTE4YjcveJE=: 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.512 09:26:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.512 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.079 nvme0n1 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: ]] 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:35.079 09:26:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.079 09:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.647 nvme0n1 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: ]] 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.647 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.214 nvme0n1 00:24:36.214 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.214 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.214 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.214 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.214 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.214 09:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTQ1MzJlY2QyM2UwYTdjMjdkMjQ3Mzc3ZDM4YzBkNTljOGU4YTQ4MzExNjMxZTQ1/eubQg==: 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: ]] 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5M2RmY2ZkNjM4MGJjN2RjNzNiYTVjOTRjMmRlNjLzu+OD: 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.214 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.782 nvme0n1 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmIwZDAzM2YyMDkyZjA0NzMwYTcyZjhiZjRkYjc4ZjExYTBjYzJjMWRiMWIzOTVjZGJjYzhjYjRhMDYwOTNhZaVuWWU=: 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:36.782 09:26:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.782 09:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.349 nvme0n1 00:24:37.349 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.349 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.349 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.349 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.349 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.349 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.349 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.349 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.349 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.349 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
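For keyid=4 there is no controller key (ckey is empty in the trace), so the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 produces an empty array and the attach above is issued with --dhchap-key key4 only, i.e. the target is not asked to authenticate back to the host. Roughly as it appears in the traced script:

    # ckey stays an empty array when ckeys[keyid] is unset or empty, so the flag is omitted entirely.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

With the ffdhe8192 passes done, the sweep over digests and DH groups ends here; the remainder of the log switches to sha256/ffdhe2048 and exercises the failure paths.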
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: ]] 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.608 request: 00:24:37.608 { 00:24:37.608 "name": "nvme0", 00:24:37.608 "trtype": "tcp", 00:24:37.608 "traddr": "10.0.0.1", 00:24:37.608 "adrfam": "ipv4", 00:24:37.608 "trsvcid": "4420", 00:24:37.608 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:37.608 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:37.608 "prchk_reftag": false, 00:24:37.608 "prchk_guard": false, 00:24:37.608 "hdgst": false, 00:24:37.608 "ddgst": false, 00:24:37.608 "allow_unrecognized_csi": false, 00:24:37.608 "method": "bdev_nvme_attach_controller", 00:24:37.608 "req_id": 1 00:24:37.608 } 00:24:37.608 Got JSON-RPC error response 00:24:37.608 response: 00:24:37.608 { 00:24:37.608 "code": -5, 00:24:37.608 "message": "Input/output error" 00:24:37.608 } 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.608 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.609 request: 00:24:37.609 { 00:24:37.609 "name": "nvme0", 00:24:37.609 "trtype": "tcp", 00:24:37.609 "traddr": "10.0.0.1", 00:24:37.609 "adrfam": "ipv4", 00:24:37.609 "trsvcid": "4420", 00:24:37.609 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:37.609 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:37.609 "prchk_reftag": false, 00:24:37.609 "prchk_guard": false, 00:24:37.609 "hdgst": false, 00:24:37.609 "ddgst": false, 00:24:37.609 "dhchap_key": "key2", 00:24:37.609 "allow_unrecognized_csi": false, 00:24:37.609 "method": "bdev_nvme_attach_controller", 00:24:37.609 "req_id": 1 00:24:37.609 } 00:24:37.609 Got JSON-RPC error response 00:24:37.609 response: 00:24:37.609 { 00:24:37.609 "code": -5, 00:24:37.609 "message": "Input/output error" 00:24:37.609 } 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:37.609 09:26:31 
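The request/response pairs above are expected failures: with the target now keyed for sha256/ffdhe2048 keyid=1, an attach that supplies no DH-CHAP key, or the wrong one (key2), is rejected and the RPC surfaces JSON-RPC error -5 (Input/output error). The script asserts this by wrapping the call in the NOT helper from autotest_common.sh, which inverts the exit status; schematically (a paraphrase of the traced commands, not a verbatim excerpt):

    # Must fail: wrong host key for the key the target currently expects.
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2
    # No controller may be left behind after the failed attach.
    (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))
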
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.609 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.868 request: 00:24:37.868 { 00:24:37.868 "name": "nvme0", 00:24:37.868 "trtype": "tcp", 00:24:37.868 "traddr": "10.0.0.1", 00:24:37.868 "adrfam": "ipv4", 00:24:37.868 "trsvcid": "4420", 
00:24:37.868 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:37.868 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:37.868 "prchk_reftag": false, 00:24:37.868 "prchk_guard": false, 00:24:37.868 "hdgst": false, 00:24:37.868 "ddgst": false, 00:24:37.868 "dhchap_key": "key1", 00:24:37.868 "dhchap_ctrlr_key": "ckey2", 00:24:37.868 "allow_unrecognized_csi": false, 00:24:37.868 "method": "bdev_nvme_attach_controller", 00:24:37.868 "req_id": 1 00:24:37.868 } 00:24:37.868 Got JSON-RPC error response 00:24:37.868 response: 00:24:37.868 { 00:24:37.868 "code": -5, 00:24:37.868 "message": "Input/output error" 00:24:37.868 } 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.868 nvme0n1 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: ]] 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.868 request: 00:24:37.868 { 00:24:37.868 "name": "nvme0", 00:24:37.868 "dhchap_key": "key1", 00:24:37.868 "dhchap_ctrlr_key": "ckey2", 00:24:37.868 "method": "bdev_nvme_set_keys", 00:24:37.868 "req_id": 1 00:24:37.868 } 00:24:37.868 Got JSON-RPC error response 00:24:37.868 response: 00:24:37.868 
{ 00:24:37.868 "code": -13, 00:24:37.868 "message": "Permission denied" 00:24:37.868 } 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.868 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.126 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.126 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:24:38.126 09:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTJkOWQwZDQzMzNjZmIyODg3Y2YzNzI3NTBiMzFlYWEyMjQwNmQ3MjU1ZWVhMmMwI4efHQ==: 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: ]] 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2E4Y2RhNTM4YWViYzE2Zjg2Mzg2MWE4ODQwM2QyNWRkMDBhZDI4NjNhM2NhN2Ez/cA1WQ==: 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.061 nvme0n1 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:39.061 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:39.062 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:39.062 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:39.062 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTNhNTQ5OGFlZDNmODkyNDNiYzY0MmM1MjVmMGQ4ZmKFXqU1: 00:24:39.062 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: ]] 00:24:39.062 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTQ3NzFkNDdjZDc2MzA3OTM0YjhlYzdmMTRkMzBmNGRiNsWM: 00:24:39.062 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:39.062 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:39.062 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:39.062 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:39.062 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:39.062 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:39.062 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:39.062 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:39.062 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.320 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.320 request: 00:24:39.320 { 00:24:39.320 "name": "nvme0", 00:24:39.320 "dhchap_key": "key2", 00:24:39.320 "dhchap_ctrlr_key": "ckey1", 00:24:39.320 "method": "bdev_nvme_set_keys", 00:24:39.320 "req_id": 1 00:24:39.320 } 00:24:39.320 Got JSON-RPC error response 00:24:39.320 response: 00:24:39.320 { 00:24:39.320 "code": -13, 00:24:39.320 "message": "Permission denied" 00:24:39.320 } 00:24:39.320 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:39.320 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:39.320 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:39.320 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:39.320 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:39.320 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.320 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:39.320 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.320 09:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.320 09:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.320 09:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:24:39.320 09:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:24:40.256 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.256 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:40.256 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.256 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.256 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.256 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:24:40.256 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:24:40.256 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:24:40.256 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:40.256 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:24:40.256 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:24:40.256 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:40.256 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:24:40.256 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:40.256 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:40.256 rmmod nvme_tcp 00:24:40.256 rmmod nvme_fabrics 00:24:40.256 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:40.515 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:24:40.515 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:24:40.515 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 86270 ']' 00:24:40.515 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 86270 00:24:40.515 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 86270 ']' 00:24:40.515 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 86270 00:24:40.515 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:24:40.515 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:40.515 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86270 00:24:40.515 killing process with pid 86270 00:24:40.515 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:40.515 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:40.515 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86270' 00:24:40.515 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 86270 00:24:40.515 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 86270 00:24:41.451 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:41.451 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:41.451 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:41.451 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:24:41.451 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:24:41.451 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:41.451 09:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:41.451 09:26:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:41.451 09:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:42.387 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:42.387 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
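The cleanup above unwinds the kernel nvmet configfs tree in the reverse of its creation order before unloading the modules. A minimal sketch of that order, using the NQNs from this run (the exact attribute written by the bare `echo 0` step is an assumption here, not visible in the log), would be:

    # unlink the host from the subsystem, then remove the host entry
    rm    /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    # disable the namespace (assumed target of the 'echo 0' seen above), detach the port,
    # then remove the tree bottom-up and unload the transport/core modules
    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    modprobe -r nvmet_tcp nvmet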
00:24:42.387 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:42.387 09:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.wcv /tmp/spdk.key-null.VoT /tmp/spdk.key-sha256.YnV /tmp/spdk.key-sha384.FfL /tmp/spdk.key-sha512.x8r /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:24:42.387 09:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:42.646 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:42.646 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:42.646 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:42.904 00:24:42.904 real 0m36.978s 00:24:42.904 user 0m33.724s 00:24:42.904 sys 0m4.041s 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.904 ************************************ 00:24:42.904 END TEST nvmf_auth_host 00:24:42.904 ************************************ 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.904 ************************************ 00:24:42.904 START TEST nvmf_digest 00:24:42.904 ************************************ 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:42.904 * Looking for test storage... 
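For reference, the -13 "Permission denied" responses recorded in the auth test above come from rotating to a key pair the target will not accept. A stripped-down sketch of that negative check, using the same RPCs and key IDs that appear in the log (rpc.py stands for the scripts/rpc.py wrapper that rpc_cmd invokes, and it assumes the DH-HMAC-CHAP keys were already loaded into the keyring earlier in the test):

    # attach with the currently valid key pair (flags copied from the run above)
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # a matching key/controller-key pair is accepted ...
    rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # ... while a mismatched pair is expected to fail with JSON-RPC error -13
    rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 \
        || echo "rejected as expected"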
00:24:42.904 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:42.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.904 --rc genhtml_branch_coverage=1 00:24:42.904 --rc genhtml_function_coverage=1 00:24:42.904 --rc genhtml_legend=1 00:24:42.904 --rc geninfo_all_blocks=1 00:24:42.904 --rc geninfo_unexecuted_blocks=1 00:24:42.904 00:24:42.904 ' 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:42.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.904 --rc genhtml_branch_coverage=1 00:24:42.904 --rc genhtml_function_coverage=1 00:24:42.904 --rc genhtml_legend=1 00:24:42.904 --rc geninfo_all_blocks=1 00:24:42.904 --rc geninfo_unexecuted_blocks=1 00:24:42.904 00:24:42.904 ' 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:42.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.904 --rc genhtml_branch_coverage=1 00:24:42.904 --rc genhtml_function_coverage=1 00:24:42.904 --rc genhtml_legend=1 00:24:42.904 --rc geninfo_all_blocks=1 00:24:42.904 --rc geninfo_unexecuted_blocks=1 00:24:42.904 00:24:42.904 ' 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:42.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.904 --rc genhtml_branch_coverage=1 00:24:42.904 --rc genhtml_function_coverage=1 00:24:42.904 --rc genhtml_legend=1 00:24:42.904 --rc geninfo_all_blocks=1 00:24:42.904 --rc geninfo_unexecuted_blocks=1 00:24:42.904 00:24:42.904 ' 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:42.904 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:43.163 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:43.163 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:43.163 09:26:36 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:43.163 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:43.163 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:43.163 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:43.163 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:43.163 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:43.163 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:43.163 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:43.163 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:24:43.163 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:43.164 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:43.164 Cannot find device "nvmf_init_br" 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:43.164 Cannot find device "nvmf_init_br2" 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:43.164 Cannot find device "nvmf_tgt_br" 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:24:43.164 Cannot find device "nvmf_tgt_br2" 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:43.164 Cannot find device "nvmf_init_br" 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:43.164 Cannot find device "nvmf_init_br2" 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:43.164 Cannot find device "nvmf_tgt_br" 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:43.164 Cannot find device "nvmf_tgt_br2" 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:43.164 Cannot find device "nvmf_br" 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:43.164 Cannot find device "nvmf_init_if" 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:43.164 Cannot find device "nvmf_init_if2" 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:43.164 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:43.164 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:43.164 09:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:43.164 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:43.164 09:26:37 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:43.165 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:43.165 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:43.165 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:43.165 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:43.165 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:43.165 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:43.165 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:43.423 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:43.423 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:43.423 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:43.423 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:43.424 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:43.424 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:24:43.424 00:24:43.424 --- 10.0.0.3 ping statistics --- 00:24:43.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.424 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:43.424 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:43.424 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:24:43.424 00:24:43.424 --- 10.0.0.4 ping statistics --- 00:24:43.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.424 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:43.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:43.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:24:43.424 00:24:43.424 --- 10.0.0.1 ping statistics --- 00:24:43.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.424 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:43.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:43.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:24:43.424 00:24:43.424 --- 10.0.0.2 ping statistics --- 00:24:43.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.424 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:43.424 ************************************ 00:24:43.424 START TEST nvmf_digest_clean 00:24:43.424 ************************************ 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
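The ping checks above are the tail end of the veth-based topology that nvmf_veth_init builds. Reduced to its essentials (keeping one initiator and one target interface out of the two of each that the script creates, and dropping the iptables comments), the topology amounts to:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # initiator -> target, as verified above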
00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=87917 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 87917 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 87917 ']' 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:43.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:43.424 09:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:43.682 [2024-12-13 09:26:37.338386] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:24:43.682 [2024-12-13 09:26:37.338537] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.682 [2024-12-13 09:26:37.519746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.941 [2024-12-13 09:26:37.600406] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.941 [2024-12-13 09:26:37.600457] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.941 [2024-12-13 09:26:37.600473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:43.941 [2024-12-13 09:26:37.600493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:43.941 [2024-12-13 09:26:37.600505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:43.941 [2024-12-13 09:26:37.601465] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.508 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:44.508 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:44.508 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:44.508 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:44.508 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:44.508 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:44.508 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:44.508 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:44.508 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:44.508 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.508 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:44.767 [2024-12-13 09:26:38.510691] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:44.767 null0 00:24:44.767 [2024-12-13 09:26:38.618103] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:44.767 [2024-12-13 09:26:38.642306] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:44.767 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.767 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:44.767 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:44.767 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:44.767 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:44.767 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:44.767 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:44.767 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:44.767 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=87953 00:24:44.767 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 87953 /var/tmp/bperf.sock 00:24:44.767 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:44.767 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 87953 ']' 00:24:44.767 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:24:44.767 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:44.767 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:44.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:44.767 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:44.767 09:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:45.026 [2024-12-13 09:26:38.759174] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:24:45.026 [2024-12-13 09:26:38.759353] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87953 ] 00:24:45.284 [2024-12-13 09:26:38.934642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.284 [2024-12-13 09:26:39.019043] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.851 09:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:45.851 09:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:45.851 09:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:45.851 09:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:45.851 09:26:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:46.419 [2024-12-13 09:26:40.077499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:46.419 09:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:46.419 09:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:46.677 nvme0n1 00:24:46.677 09:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:46.678 09:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:46.936 Running I/O for 2 seconds... 
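The run above follows the usual bperf pattern for the digest tests: bdevperf is started idle on its own RPC socket, the framework is initialized, a controller is attached with data digest enabled, and only then is the workload kicked off. A condensed sketch with paths shortened and the socket, address, and NQN as in the log:

    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests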
00:24:48.810 14351.00 IOPS, 56.06 MiB/s [2024-12-13T09:26:42.700Z] 14414.50 IOPS, 56.31 MiB/s 00:24:48.810 Latency(us) 00:24:48.810 [2024-12-13T09:26:42.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.810 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:48.810 nvme0n1 : 2.01 14419.74 56.33 0.00 0.00 8870.51 8340.95 24188.74 00:24:48.810 [2024-12-13T09:26:42.700Z] =================================================================================================================== 00:24:48.810 [2024-12-13T09:26:42.700Z] Total : 14419.74 56.33 0.00 0.00 8870.51 8340.95 24188.74 00:24:48.810 { 00:24:48.810 "results": [ 00:24:48.810 { 00:24:48.810 "job": "nvme0n1", 00:24:48.810 "core_mask": "0x2", 00:24:48.810 "workload": "randread", 00:24:48.810 "status": "finished", 00:24:48.810 "queue_depth": 128, 00:24:48.810 "io_size": 4096, 00:24:48.810 "runtime": 2.00815, 00:24:48.810 "iops": 14419.739561287752, 00:24:48.810 "mibps": 56.32710766128028, 00:24:48.810 "io_failed": 0, 00:24:48.810 "io_timeout": 0, 00:24:48.810 "avg_latency_us": 8870.512718858998, 00:24:48.810 "min_latency_us": 8340.945454545454, 00:24:48.810 "max_latency_us": 24188.741818181818 00:24:48.810 } 00:24:48.810 ], 00:24:48.810 "core_count": 1 00:24:48.810 } 00:24:48.810 09:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:48.810 09:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:48.810 09:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:48.810 09:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:48.810 09:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:48.810 | select(.opcode=="crc32c") 00:24:48.810 | "\(.module_name) \(.executed)"' 00:24:49.069 09:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:49.069 09:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:49.069 09:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:49.069 09:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:49.069 09:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 87953 00:24:49.069 09:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 87953 ']' 00:24:49.069 09:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 87953 00:24:49.069 09:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:49.069 09:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:49.069 09:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87953 00:24:49.070 killing process with pid 87953 00:24:49.070 Received shutdown signal, test time was about 2.000000 seconds 00:24:49.070 00:24:49.070 Latency(us) 00:24:49.070 [2024-12-13T09:26:42.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:49.070 [2024-12-13T09:26:42.960Z] =================================================================================================================== 00:24:49.070 [2024-12-13T09:26:42.960Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:49.070 09:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:49.070 09:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:49.070 09:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87953' 00:24:49.070 09:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 87953 00:24:49.070 09:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 87953 00:24:50.007 09:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:50.007 09:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:50.007 09:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:50.007 09:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:50.007 09:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:50.007 09:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:50.007 09:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:50.007 09:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=88020 00:24:50.007 09:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:50.007 09:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 88020 /var/tmp/bperf.sock 00:24:50.007 09:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 88020 ']' 00:24:50.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:50.007 09:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:50.007 09:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:50.007 09:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:50.007 09:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:50.007 09:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:50.007 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:50.007 Zero copy mechanism will not be used. 00:24:50.007 [2024-12-13 09:26:43.887432] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:24:50.007 [2024-12-13 09:26:43.887594] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88020 ] 00:24:50.266 [2024-12-13 09:26:44.063421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.266 [2024-12-13 09:26:44.144049] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.202 09:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:51.202 09:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:51.202 09:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:51.202 09:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:51.202 09:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:51.461 [2024-12-13 09:26:45.127969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:51.461 09:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:51.461 09:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:51.720 nvme0n1 00:24:51.720 09:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:51.720 09:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:51.720 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:51.720 Zero copy mechanism will not be used. 00:24:51.720 Running I/O for 2 seconds... 
00:24:54.034 7312.00 IOPS, 914.00 MiB/s [2024-12-13T09:26:47.924Z] 7304.00 IOPS, 913.00 MiB/s 00:24:54.034 Latency(us) 00:24:54.034 [2024-12-13T09:26:47.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.034 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:54.034 nvme0n1 : 2.00 7300.96 912.62 0.00 0.00 2188.18 1951.19 5242.88 00:24:54.034 [2024-12-13T09:26:47.924Z] =================================================================================================================== 00:24:54.034 [2024-12-13T09:26:47.924Z] Total : 7300.96 912.62 0.00 0.00 2188.18 1951.19 5242.88 00:24:54.034 { 00:24:54.034 "results": [ 00:24:54.034 { 00:24:54.034 "job": "nvme0n1", 00:24:54.034 "core_mask": "0x2", 00:24:54.034 "workload": "randread", 00:24:54.034 "status": "finished", 00:24:54.034 "queue_depth": 16, 00:24:54.034 "io_size": 131072, 00:24:54.034 "runtime": 2.003025, 00:24:54.034 "iops": 7300.957302080603, 00:24:54.034 "mibps": 912.6196627600754, 00:24:54.034 "io_failed": 0, 00:24:54.034 "io_timeout": 0, 00:24:54.034 "avg_latency_us": 2188.1751501889794, 00:24:54.034 "min_latency_us": 1951.1854545454546, 00:24:54.034 "max_latency_us": 5242.88 00:24:54.034 } 00:24:54.034 ], 00:24:54.034 "core_count": 1 00:24:54.034 } 00:24:54.034 09:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:54.034 09:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:54.034 09:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:54.034 09:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:54.034 | select(.opcode=="crc32c") 00:24:54.034 | "\(.module_name) \(.executed)"' 00:24:54.034 09:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:54.034 09:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:54.034 09:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:54.034 09:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:54.034 09:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:54.034 09:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 88020 00:24:54.034 09:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 88020 ']' 00:24:54.034 09:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 88020 00:24:54.034 09:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:54.034 09:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:54.034 09:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88020 00:24:54.293 killing process with pid 88020 00:24:54.293 Received shutdown signal, test time was about 2.000000 seconds 00:24:54.293 00:24:54.293 Latency(us) 00:24:54.293 [2024-12-13T09:26:48.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.293 
[2024-12-13T09:26:48.183Z] =================================================================================================================== 00:24:54.293 [2024-12-13T09:26:48.183Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:54.293 09:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:54.293 09:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:54.293 09:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88020' 00:24:54.293 09:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 88020 00:24:54.293 09:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 88020 00:24:55.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:55.240 09:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:55.240 09:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:55.240 09:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:55.240 09:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:55.240 09:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:55.240 09:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:55.240 09:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:55.240 09:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=88088 00:24:55.240 09:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 88088 /var/tmp/bperf.sock 00:24:55.240 09:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 88088 ']' 00:24:55.240 09:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:55.240 09:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:55.240 09:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:55.240 09:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:55.240 09:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:55.240 09:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:55.240 [2024-12-13 09:26:48.843960] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:24:55.240 [2024-12-13 09:26:48.844105] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88088 ] 00:24:55.240 [2024-12-13 09:26:49.003263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.240 [2024-12-13 09:26:49.083773] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.851 09:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:55.851 09:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:55.851 09:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:55.851 09:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:55.851 09:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:56.421 [2024-12-13 09:26:50.108511] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:56.421 09:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:56.421 09:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:56.679 nvme0n1 00:24:56.679 09:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:56.679 09:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:56.937 Running I/O for 2 seconds... 
00:24:58.810 15876.00 IOPS, 62.02 MiB/s [2024-12-13T09:26:52.700Z] 15875.50 IOPS, 62.01 MiB/s 00:24:58.810 Latency(us) 00:24:58.810 [2024-12-13T09:26:52.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.810 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:58.810 nvme0n1 : 2.01 15881.21 62.04 0.00 0.00 8052.46 2502.28 17515.99 00:24:58.810 [2024-12-13T09:26:52.700Z] =================================================================================================================== 00:24:58.810 [2024-12-13T09:26:52.700Z] Total : 15881.21 62.04 0.00 0.00 8052.46 2502.28 17515.99 00:24:58.810 { 00:24:58.810 "results": [ 00:24:58.810 { 00:24:58.810 "job": "nvme0n1", 00:24:58.810 "core_mask": "0x2", 00:24:58.810 "workload": "randwrite", 00:24:58.810 "status": "finished", 00:24:58.810 "queue_depth": 128, 00:24:58.810 "io_size": 4096, 00:24:58.810 "runtime": 2.007341, 00:24:58.810 "iops": 15881.208025940785, 00:24:58.810 "mibps": 62.03596885133119, 00:24:58.810 "io_failed": 0, 00:24:58.810 "io_timeout": 0, 00:24:58.810 "avg_latency_us": 8052.460876781238, 00:24:58.810 "min_latency_us": 2502.2836363636366, 00:24:58.810 "max_latency_us": 17515.985454545455 00:24:58.810 } 00:24:58.810 ], 00:24:58.810 "core_count": 1 00:24:58.810 } 00:24:58.810 09:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:58.810 09:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:58.810 09:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:58.810 09:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:58.810 09:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:58.810 | select(.opcode=="crc32c") 00:24:58.810 | "\(.module_name) \(.executed)"' 00:24:59.378 09:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:59.378 09:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:59.378 09:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:59.378 09:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:59.378 09:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 88088 00:24:59.378 09:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 88088 ']' 00:24:59.378 09:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 88088 00:24:59.378 09:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:59.378 09:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:59.379 09:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88088 00:24:59.379 killing process with pid 88088 00:24:59.379 Received shutdown signal, test time was about 2.000000 seconds 00:24:59.379 00:24:59.379 Latency(us) 00:24:59.379 [2024-12-13T09:26:53.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:59.379 [2024-12-13T09:26:53.269Z] =================================================================================================================== 00:24:59.379 [2024-12-13T09:26:53.269Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:59.379 09:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:59.379 09:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:59.379 09:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88088' 00:24:59.379 09:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 88088 00:24:59.379 09:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 88088 00:24:59.947 09:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:59.947 09:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:59.947 09:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:59.947 09:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:59.947 09:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:59.947 09:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:59.947 09:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:59.947 09:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=88155 00:24:59.947 09:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:59.947 09:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 88155 /var/tmp/bperf.sock 00:24:59.947 09:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 88155 ']' 00:24:59.947 09:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:59.947 09:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:59.947 09:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:59.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:59.947 09:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:59.947 09:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:00.206 [2024-12-13 09:26:53.912144] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:25:00.206 [2024-12-13 09:26:53.912332] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88155 ] 00:25:00.206 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:00.206 Zero copy mechanism will not be used. 00:25:00.206 [2024-12-13 09:26:54.081958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.465 [2024-12-13 09:26:54.164579] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.032 09:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:01.032 09:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:01.032 09:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:01.032 09:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:01.032 09:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:01.599 [2024-12-13 09:26:55.205352] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:01.599 09:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:01.599 09:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:01.858 nvme0n1 00:25:01.858 09:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:01.858 09:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:01.858 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:01.858 Zero copy mechanism will not be used. 00:25:01.858 Running I/O for 2 seconds... 
00:25:04.172 5855.00 IOPS, 731.88 MiB/s [2024-12-13T09:26:58.062Z] 5894.50 IOPS, 736.81 MiB/s 00:25:04.172 Latency(us) 00:25:04.172 [2024-12-13T09:26:58.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.172 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:04.172 nvme0n1 : 2.00 5891.10 736.39 0.00 0.00 2708.85 2442.71 7357.91 00:25:04.172 [2024-12-13T09:26:58.062Z] =================================================================================================================== 00:25:04.172 [2024-12-13T09:26:58.062Z] Total : 5891.10 736.39 0.00 0.00 2708.85 2442.71 7357.91 00:25:04.172 { 00:25:04.172 "results": [ 00:25:04.172 { 00:25:04.172 "job": "nvme0n1", 00:25:04.172 "core_mask": "0x2", 00:25:04.172 "workload": "randwrite", 00:25:04.172 "status": "finished", 00:25:04.172 "queue_depth": 16, 00:25:04.172 "io_size": 131072, 00:25:04.172 "runtime": 2.004378, 00:25:04.172 "iops": 5891.104372528535, 00:25:04.172 "mibps": 736.3880465660669, 00:25:04.172 "io_failed": 0, 00:25:04.172 "io_timeout": 0, 00:25:04.172 "avg_latency_us": 2708.8483271741807, 00:25:04.172 "min_latency_us": 2442.7054545454544, 00:25:04.172 "max_latency_us": 7357.905454545455 00:25:04.172 } 00:25:04.172 ], 00:25:04.172 "core_count": 1 00:25:04.172 } 00:25:04.172 09:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:04.172 09:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:04.172 09:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:04.172 09:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:04.172 09:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:04.172 | select(.opcode=="crc32c") 00:25:04.172 | "\(.module_name) \(.executed)"' 00:25:04.172 09:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:04.172 09:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:04.172 09:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:04.172 09:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:04.172 09:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 88155 00:25:04.172 09:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 88155 ']' 00:25:04.172 09:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 88155 00:25:04.172 09:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:04.172 09:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:04.172 09:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88155 00:25:04.172 killing process with pid 88155 00:25:04.172 Received shutdown signal, test time was about 2.000000 seconds 00:25:04.172 00:25:04.172 Latency(us) 00:25:04.172 [2024-12-13T09:26:58.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:25:04.172 [2024-12-13T09:26:58.062Z] =================================================================================================================== 00:25:04.172 [2024-12-13T09:26:58.062Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:04.172 09:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:04.172 09:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:04.172 09:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88155' 00:25:04.173 09:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 88155 00:25:04.173 09:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 88155 00:25:05.109 09:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 87917 00:25:05.109 09:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 87917 ']' 00:25:05.109 09:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 87917 00:25:05.109 09:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:05.109 09:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:05.109 09:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87917 00:25:05.109 killing process with pid 87917 00:25:05.109 09:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:05.109 09:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:05.109 09:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87917' 00:25:05.109 09:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 87917 00:25:05.109 09:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 87917 00:25:06.046 00:25:06.046 real 0m22.465s 00:25:06.046 user 0m43.469s 00:25:06.046 sys 0m4.299s 00:25:06.046 09:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:06.046 ************************************ 00:25:06.046 END TEST nvmf_digest_clean 00:25:06.046 ************************************ 00:25:06.046 09:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:06.046 09:26:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:06.046 09:26:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:06.046 09:26:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:06.046 09:26:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:06.046 ************************************ 00:25:06.046 START TEST nvmf_digest_error 00:25:06.046 ************************************ 00:25:06.046 09:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:25:06.046 09:26:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:06.046 09:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:06.046 09:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:06.046 09:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:06.046 09:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=88253 00:25:06.046 09:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 88253 00:25:06.046 09:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:06.046 09:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 88253 ']' 00:25:06.046 09:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.046 09:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:06.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:06.046 09:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.046 09:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:06.046 09:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:06.046 [2024-12-13 09:26:59.857747] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:25:06.046 [2024-12-13 09:26:59.857921] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.306 [2024-12-13 09:27:00.040844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.306 [2024-12-13 09:27:00.125896] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.306 [2024-12-13 09:27:00.125968] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.306 [2024-12-13 09:27:00.126001] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.306 [2024-12-13 09:27:00.126022] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.306 [2024-12-13 09:27:00.126036] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:06.306 [2024-12-13 09:27:00.127292] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.241 09:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:07.241 09:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:07.241 09:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:07.241 09:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:07.241 09:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:07.241 09:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:07.241 09:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:07.241 09:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.241 09:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:07.241 [2024-12-13 09:27:00.856172] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:07.241 09:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.241 09:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:07.241 09:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:07.241 09:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.241 09:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:07.241 [2024-12-13 09:27:01.009401] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:07.241 null0 00:25:07.241 [2024-12-13 09:27:01.111787] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.500 [2024-12-13 09:27:01.135995] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:07.500 09:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.500 09:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:07.500 09:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:07.500 09:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:07.500 09:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:07.500 09:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:07.500 09:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=88285 00:25:07.500 09:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 88285 /var/tmp/bperf.sock 00:25:07.500 09:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:07.500 09:27:01 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 88285 ']' 00:25:07.500 09:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:07.500 09:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:07.500 09:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:07.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:07.500 09:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:07.500 09:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:07.500 [2024-12-13 09:27:01.250781] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:25:07.500 [2024-12-13 09:27:01.251262] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88285 ] 00:25:07.759 [2024-12-13 09:27:01.436505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.759 [2024-12-13 09:27:01.559591] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.017 [2024-12-13 09:27:01.719791] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:08.276 09:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.276 09:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:08.276 09:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:08.276 09:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:08.535 09:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:08.535 09:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.535 09:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:08.535 09:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.535 09:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:08.535 09:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:08.794 nvme0n1 00:25:08.794 09:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:08.794 09:27:02 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.794 09:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:08.794 09:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.794 09:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:08.794 09:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:09.053 Running I/O for 2 seconds... 00:25:09.053 [2024-12-13 09:27:02.795188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.053 [2024-12-13 09:27:02.795333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.053 [2024-12-13 09:27:02.795358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.053 [2024-12-13 09:27:02.813132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.053 [2024-12-13 09:27:02.813191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.053 [2024-12-13 09:27:02.813213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.053 [2024-12-13 09:27:02.830593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.053 [2024-12-13 09:27:02.830661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.053 [2024-12-13 09:27:02.830681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.053 [2024-12-13 09:27:02.849431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.053 [2024-12-13 09:27:02.849493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.053 [2024-12-13 09:27:02.849517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.053 [2024-12-13 09:27:02.867048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.053 [2024-12-13 09:27:02.867333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.053 [2024-12-13 09:27:02.867365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.053 [2024-12-13 09:27:02.884935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.053 [2024-12-13 09:27:02.885157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:25238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.053 [2024-12-13 09:27:02.885181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.053 [2024-12-13 09:27:02.903851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.053 [2024-12-13 09:27:02.903913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.053 [2024-12-13 09:27:02.903935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.053 [2024-12-13 09:27:02.921540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.053 [2024-12-13 09:27:02.921606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.053 [2024-12-13 09:27:02.921625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.053 [2024-12-13 09:27:02.939366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.053 [2024-12-13 09:27:02.939458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.053 [2024-12-13 09:27:02.939477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.312 [2024-12-13 09:27:02.958850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.312 [2024-12-13 09:27:02.958935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.312 [2024-12-13 09:27:02.958960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.312 [2024-12-13 09:27:02.976566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.312 [2024-12-13 09:27:02.976803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.312 [2024-12-13 09:27:02.976827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.312 [2024-12-13 09:27:02.994786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.312 [2024-12-13 09:27:02.994979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.312 [2024-12-13 09:27:02.995011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.312 [2024-12-13 09:27:03.013968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.312 [2024-12-13 
09:27:03.014030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.312 [2024-12-13 09:27:03.014052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.312 [2024-12-13 09:27:03.032658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.313 [2024-12-13 09:27:03.032899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.313 [2024-12-13 09:27:03.032924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.313 [2024-12-13 09:27:03.051531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.313 [2024-12-13 09:27:03.051592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.313 [2024-12-13 09:27:03.051613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.313 [2024-12-13 09:27:03.070263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.313 [2024-12-13 09:27:03.070338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.313 [2024-12-13 09:27:03.070357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.313 [2024-12-13 09:27:03.088508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.313 [2024-12-13 09:27:03.088745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.313 [2024-12-13 09:27:03.088775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.313 [2024-12-13 09:27:03.106970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.313 [2024-12-13 09:27:03.107152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.313 [2024-12-13 09:27:03.107214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.313 [2024-12-13 09:27:03.125556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.313 [2024-12-13 09:27:03.125622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.313 [2024-12-13 09:27:03.125641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.313 [2024-12-13 09:27:03.143926] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.313 [2024-12-13 09:27:03.143987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.313 [2024-12-13 09:27:03.144008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.313 [2024-12-13 09:27:03.162183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.313 [2024-12-13 09:27:03.162251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.313 [2024-12-13 09:27:03.162269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.313 [2024-12-13 09:27:03.180741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.313 [2024-12-13 09:27:03.180807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.313 [2024-12-13 09:27:03.180827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.313 [2024-12-13 09:27:03.199510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.313 [2024-12-13 09:27:03.199571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.313 [2024-12-13 09:27:03.199624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.572 [2024-12-13 09:27:03.218424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.572 [2024-12-13 09:27:03.218643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.573 [2024-12-13 09:27:03.218668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.573 [2024-12-13 09:27:03.236983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.573 [2024-12-13 09:27:03.237193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.573 [2024-12-13 09:27:03.237223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.573 [2024-12-13 09:27:03.255472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.573 [2024-12-13 09:27:03.255537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.573 [2024-12-13 09:27:03.255556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.573 [2024-12-13 09:27:03.273600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.573 [2024-12-13 09:27:03.273669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.573 [2024-12-13 09:27:03.273703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.573 [2024-12-13 09:27:03.291885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.573 [2024-12-13 09:27:03.291946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.573 [2024-12-13 09:27:03.291967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.573 [2024-12-13 09:27:03.310101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.573 [2024-12-13 09:27:03.310168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.573 [2024-12-13 09:27:03.310188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.573 [2024-12-13 09:27:03.328446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.573 [2024-12-13 09:27:03.328506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.573 [2024-12-13 09:27:03.328527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.573 [2024-12-13 09:27:03.346660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.573 [2024-12-13 09:27:03.346899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.573 [2024-12-13 09:27:03.346934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.573 [2024-12-13 09:27:03.365051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.573 [2024-12-13 09:27:03.365118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.573 [2024-12-13 09:27:03.365137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.573 [2024-12-13 09:27:03.383370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.573 [2024-12-13 09:27:03.383429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.573 [2024-12-13 09:27:03.383450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.573 [2024-12-13 09:27:03.401488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.573 [2024-12-13 09:27:03.401556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.573 [2024-12-13 09:27:03.401576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.573 [2024-12-13 09:27:03.419614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.573 [2024-12-13 09:27:03.419680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.573 [2024-12-13 09:27:03.419698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.573 [2024-12-13 09:27:03.437752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.573 [2024-12-13 09:27:03.437812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.573 [2024-12-13 09:27:03.437833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.573 [2024-12-13 09:27:03.455984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.573 [2024-12-13 09:27:03.456051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.573 [2024-12-13 09:27:03.456069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.832 [2024-12-13 09:27:03.475725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.832 [2024-12-13 09:27:03.475785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.832 [2024-12-13 09:27:03.475807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.832 [2024-12-13 09:27:03.494004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.832 [2024-12-13 09:27:03.494065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.832 [2024-12-13 09:27:03.494086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.833 [2024-12-13 09:27:03.512299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.833 [2024-12-13 09:27:03.512364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3789 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.833 [2024-12-13 09:27:03.512382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.833 [2024-12-13 09:27:03.530435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.833 [2024-12-13 09:27:03.530651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.833 [2024-12-13 09:27:03.530700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.833 [2024-12-13 09:27:03.549032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.833 [2024-12-13 09:27:03.549100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.833 [2024-12-13 09:27:03.549119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.833 [2024-12-13 09:27:03.567236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.833 [2024-12-13 09:27:03.567333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.833 [2024-12-13 09:27:03.567354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.833 [2024-12-13 09:27:03.585615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.833 [2024-12-13 09:27:03.585676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.833 [2024-12-13 09:27:03.585697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.833 [2024-12-13 09:27:03.603953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.833 [2024-12-13 09:27:03.604019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.833 [2024-12-13 09:27:03.604038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.833 [2024-12-13 09:27:03.622130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.833 [2024-12-13 09:27:03.622190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.833 [2024-12-13 09:27:03.622212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.833 [2024-12-13 09:27:03.640592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.833 [2024-12-13 09:27:03.640652] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.833 [2024-12-13 09:27:03.640673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.833 [2024-12-13 09:27:03.658752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.833 [2024-12-13 09:27:03.658989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.833 [2024-12-13 09:27:03.659016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.833 [2024-12-13 09:27:03.677224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.833 [2024-12-13 09:27:03.677459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.833 [2024-12-13 09:27:03.677491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.833 [2024-12-13 09:27:03.694955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.833 [2024-12-13 09:27:03.695015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.833 [2024-12-13 09:27:03.695036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.833 [2024-12-13 09:27:03.712690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:09.833 [2024-12-13 09:27:03.712756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.833 [2024-12-13 09:27:03.712774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.092 [2024-12-13 09:27:03.731792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.092 [2024-12-13 09:27:03.731851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.092 [2024-12-13 09:27:03.731872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.092 [2024-12-13 09:27:03.749238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.092 [2024-12-13 09:27:03.749329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.092 [2024-12-13 09:27:03.749349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.092 13663.00 IOPS, 53.37 MiB/s [2024-12-13T09:27:03.982Z] [2024-12-13 09:27:03.766782] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.092 [2024-12-13 09:27:03.766840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.092 [2024-12-13 09:27:03.766900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.092 [2024-12-13 09:27:03.785023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.092 [2024-12-13 09:27:03.785085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.092 [2024-12-13 09:27:03.785122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.092 [2024-12-13 09:27:03.807090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.092 [2024-12-13 09:27:03.807153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.092 [2024-12-13 09:27:03.807212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.092 [2024-12-13 09:27:03.827566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.092 [2024-12-13 09:27:03.827627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.092 [2024-12-13 09:27:03.827664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.092 [2024-12-13 09:27:03.846083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.092 [2024-12-13 09:27:03.846153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.092 [2024-12-13 09:27:03.846172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.092 [2024-12-13 09:27:03.864486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.092 [2024-12-13 09:27:03.864548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.092 [2024-12-13 09:27:03.864570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.093 [2024-12-13 09:27:03.883042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.093 [2024-12-13 09:27:03.883277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.093 [2024-12-13 09:27:03.883302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.093 [2024-12-13 09:27:03.901551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.093 [2024-12-13 09:27:03.901613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.093 [2024-12-13 09:27:03.901638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.093 [2024-12-13 09:27:03.919683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.093 [2024-12-13 09:27:03.919744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.093 [2024-12-13 09:27:03.919766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.093 [2024-12-13 09:27:03.937863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.093 [2024-12-13 09:27:03.937929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.093 [2024-12-13 09:27:03.937948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.093 [2024-12-13 09:27:03.964195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.093 [2024-12-13 09:27:03.964262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.093 [2024-12-13 09:27:03.964281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.352 [2024-12-13 09:27:03.983296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.352 [2024-12-13 09:27:03.983524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.352 [2024-12-13 09:27:03.983564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.352 [2024-12-13 09:27:04.001971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.352 [2024-12-13 09:27:04.002033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.352 [2024-12-13 09:27:04.002054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.352 [2024-12-13 09:27:04.020413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.352 [2024-12-13 09:27:04.020482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.352 [2024-12-13 
09:27:04.020501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.352 [2024-12-13 09:27:04.038714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.352 [2024-12-13 09:27:04.038773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.352 [2024-12-13 09:27:04.038795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.352 [2024-12-13 09:27:04.055986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.352 [2024-12-13 09:27:04.056029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.352 [2024-12-13 09:27:04.056066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.352 [2024-12-13 09:27:04.073351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.352 [2024-12-13 09:27:04.073416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.352 [2024-12-13 09:27:04.073435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.352 [2024-12-13 09:27:04.090537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.352 [2024-12-13 09:27:04.090747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.352 [2024-12-13 09:27:04.090782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.352 [2024-12-13 09:27:04.108035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.352 [2024-12-13 09:27:04.108095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.352 [2024-12-13 09:27:04.108116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.352 [2024-12-13 09:27:04.125279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.352 [2024-12-13 09:27:04.125535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.352 [2024-12-13 09:27:04.125560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.352 [2024-12-13 09:27:04.143577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.352 [2024-12-13 09:27:04.143797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:106 nsid:1 lba:425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.352 [2024-12-13 09:27:04.143827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.352 [2024-12-13 09:27:04.161233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.352 [2024-12-13 09:27:04.161305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.352 [2024-12-13 09:27:04.161329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.352 [2024-12-13 09:27:04.178530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.352 [2024-12-13 09:27:04.178744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.352 [2024-12-13 09:27:04.178767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.352 [2024-12-13 09:27:04.196014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.352 [2024-12-13 09:27:04.196074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.352 [2024-12-13 09:27:04.196095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.352 [2024-12-13 09:27:04.213226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.352 [2024-12-13 09:27:04.213306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.352 [2024-12-13 09:27:04.213348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.352 [2024-12-13 09:27:04.230470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.352 [2024-12-13 09:27:04.230535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.353 [2024-12-13 09:27:04.230554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.612 [2024-12-13 09:27:04.249617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.612 [2024-12-13 09:27:04.249680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.612 [2024-12-13 09:27:04.249700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.612 [2024-12-13 09:27:04.267397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.612 
[2024-12-13 09:27:04.267460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.612 [2024-12-13 09:27:04.267479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.612 [2024-12-13 09:27:04.284707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.612 [2024-12-13 09:27:04.284773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.612 [2024-12-13 09:27:04.284791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.612 [2024-12-13 09:27:04.301918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.612 [2024-12-13 09:27:04.301978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.612 [2024-12-13 09:27:04.301999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.612 [2024-12-13 09:27:04.319347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.612 [2024-12-13 09:27:04.319439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.612 [2024-12-13 09:27:04.319458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.612 [2024-12-13 09:27:04.336540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.612 [2024-12-13 09:27:04.336609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.612 [2024-12-13 09:27:04.336627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.612 [2024-12-13 09:27:04.353908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.612 [2024-12-13 09:27:04.353968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.612 [2024-12-13 09:27:04.353989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.612 [2024-12-13 09:27:04.371137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.612 [2024-12-13 09:27:04.371414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.612 [2024-12-13 09:27:04.371438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.612 [2024-12-13 09:27:04.388647] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.612 [2024-12-13 09:27:04.388729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.612 [2024-12-13 09:27:04.388748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.612 [2024-12-13 09:27:04.405769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.612 [2024-12-13 09:27:04.405828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.612 [2024-12-13 09:27:04.405849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.612 [2024-12-13 09:27:04.422924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.612 [2024-12-13 09:27:04.423149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.612 [2024-12-13 09:27:04.423188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.612 [2024-12-13 09:27:04.440407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.612 [2024-12-13 09:27:04.440627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.612 [2024-12-13 09:27:04.440651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.612 [2024-12-13 09:27:04.458067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.612 [2024-12-13 09:27:04.458127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.612 [2024-12-13 09:27:04.458145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.612 [2024-12-13 09:27:04.475645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.612 [2024-12-13 09:27:04.475706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.612 [2024-12-13 09:27:04.475724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.613 [2024-12-13 09:27:04.493210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.613 [2024-12-13 09:27:04.493452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.613 [2024-12-13 09:27:04.493476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.872 [2024-12-13 09:27:04.512091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.872 [2024-12-13 09:27:04.512151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.872 [2024-12-13 09:27:04.512169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.872 [2024-12-13 09:27:04.529368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.872 [2024-12-13 09:27:04.529422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.872 [2024-12-13 09:27:04.529441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.872 [2024-12-13 09:27:04.546761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.872 [2024-12-13 09:27:04.546977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.872 [2024-12-13 09:27:04.547003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.872 [2024-12-13 09:27:04.564136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.872 [2024-12-13 09:27:04.564196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.872 [2024-12-13 09:27:04.564214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.872 [2024-12-13 09:27:04.581411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.872 [2024-12-13 09:27:04.581470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.872 [2024-12-13 09:27:04.581488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.872 [2024-12-13 09:27:04.598533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.872 [2024-12-13 09:27:04.598741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.872 [2024-12-13 09:27:04.598764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.872 [2024-12-13 09:27:04.616008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.872 [2024-12-13 09:27:04.616053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.872 [2024-12-13 09:27:04.616087] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.872 [2024-12-13 09:27:04.633259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.872 [2024-12-13 09:27:04.633346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.872 [2024-12-13 09:27:04.633366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.872 [2024-12-13 09:27:04.650540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.872 [2024-12-13 09:27:04.650598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.872 [2024-12-13 09:27:04.650616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.872 [2024-12-13 09:27:04.667742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.872 [2024-12-13 09:27:04.667801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.872 [2024-12-13 09:27:04.667818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.872 [2024-12-13 09:27:04.684948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.872 [2024-12-13 09:27:04.685007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.872 [2024-12-13 09:27:04.685024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.872 [2024-12-13 09:27:04.702122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.872 [2024-12-13 09:27:04.702363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.872 [2024-12-13 09:27:04.702405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.872 [2024-12-13 09:27:04.719966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.872 [2024-12-13 09:27:04.720025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.872 [2024-12-13 09:27:04.720042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.872 [2024-12-13 09:27:04.737169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.872 [2024-12-13 09:27:04.737399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17072 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.872 [2024-12-13 09:27:04.737423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:10.872 [2024-12-13 09:27:04.754896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:10.872 [2024-12-13 09:27:04.755112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.872 [2024-12-13 09:27:04.755137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.132 13915.50 IOPS, 54.36 MiB/s [2024-12-13T09:27:05.022Z] [2024-12-13 09:27:04.774998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:11.132 [2024-12-13 09:27:04.775061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.132 [2024-12-13 09:27:04.775080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.132 00:25:11.132 Latency(us) 00:25:11.132 [2024-12-13T09:27:05.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.132 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:11.132 nvme0n1 : 2.01 13946.58 54.48 0.00 0.00 9169.76 8340.95 35985.22 00:25:11.132 [2024-12-13T09:27:05.022Z] =================================================================================================================== 00:25:11.132 [2024-12-13T09:27:05.022Z] Total : 13946.58 54.48 0.00 0.00 9169.76 8340.95 35985.22 00:25:11.132 { 00:25:11.132 "results": [ 00:25:11.132 { 00:25:11.132 "job": "nvme0n1", 00:25:11.132 "core_mask": "0x2", 00:25:11.132 "workload": "randread", 00:25:11.132 "status": "finished", 00:25:11.132 "queue_depth": 128, 00:25:11.132 "io_size": 4096, 00:25:11.132 "runtime": 2.013827, 00:25:11.132 "iops": 13946.580316978569, 00:25:11.132 "mibps": 54.478829363197534, 00:25:11.132 "io_failed": 0, 00:25:11.132 "io_timeout": 0, 00:25:11.132 "avg_latency_us": 9169.760141901821, 00:25:11.132 "min_latency_us": 8340.945454545454, 00:25:11.132 "max_latency_us": 35985.22181818182 00:25:11.132 } 00:25:11.132 ], 00:25:11.132 "core_count": 1 00:25:11.132 } 00:25:11.132 09:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:11.132 09:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:11.132 09:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:11.132 09:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:11.132 | .driver_specific 00:25:11.132 | .nvme_error 00:25:11.132 | .status_code 00:25:11.132 | .command_transient_transport_error' 00:25:11.391 09:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 110 > 0 )) 00:25:11.391 09:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 88285 00:25:11.391 09:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # '[' -z 88285 ']' 00:25:11.391 09:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 88285 00:25:11.391 09:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:11.391 09:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:11.391 09:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88285 00:25:11.391 killing process with pid 88285 00:25:11.391 Received shutdown signal, test time was about 2.000000 seconds 00:25:11.391 00:25:11.391 Latency(us) 00:25:11.391 [2024-12-13T09:27:05.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.391 [2024-12-13T09:27:05.281Z] =================================================================================================================== 00:25:11.391 [2024-12-13T09:27:05.281Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:11.391 09:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:11.391 09:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:11.391 09:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88285' 00:25:11.391 09:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 88285 00:25:11.391 09:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 88285 00:25:12.328 09:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:12.328 09:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:12.328 09:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:12.328 09:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:12.328 09:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:12.328 09:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=88352 00:25:12.328 09:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 88352 /var/tmp/bperf.sock 00:25:12.328 09:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:12.328 09:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 88352 ']' 00:25:12.328 09:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:12.328 09:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:12.328 09:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:12.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
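The (( 110 > 0 )) check above is get_transient_errcount at work: it asks the bdevperf app, over its RPC socket, for bdev_get_iostat and pulls the command_transient_transport_error counter out of the nvme_error block (those statistics are collected when bdev_nvme_set_options is given --nvme-error-stat, as in the setup traced below for the next run). A minimal standalone sketch of the same query, assuming the socket path /var/tmp/bperf.sock and bdev name nvme0n1 used in this run:

  # Sketch only -- mirrors the get_transient_errcount trace above; the socket path
  # and bdev name are the ones this run uses and are assumptions outside of it.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) && echo "transient transport errors recorded: $errcount"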
00:25:12.328 09:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:12.328 09:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:12.328 [2024-12-13 09:27:06.015855] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:25:12.328 [2024-12-13 09:27:06.016311] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88352 ] 00:25:12.328 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:12.328 Zero copy mechanism will not be used. 00:25:12.328 [2024-12-13 09:27:06.194436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.587 [2024-12-13 09:27:06.276815] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.587 [2024-12-13 09:27:06.431468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:13.154 09:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:13.154 09:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:13.154 09:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:13.154 09:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:13.413 09:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:13.413 09:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.413 09:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:13.413 09:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.413 09:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:13.413 09:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:13.672 nvme0n1 00:25:13.672 09:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:13.672 09:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.672 09:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:13.672 09:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.672 09:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:13.672 09:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:13.932 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:13.932 Zero copy mechanism will not be used. 00:25:13.932 Running I/O for 2 seconds... 00:25:13.932 [2024-12-13 09:27:07.600474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.932 [2024-12-13 09:27:07.601030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.932 [2024-12-13 09:27:07.601161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:13.932 [2024-12-13 09:27:07.606161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.932 [2024-12-13 09:27:07.606485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.932 [2024-12-13 09:27:07.606888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:13.932 [2024-12-13 09:27:07.611966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.932 [2024-12-13 09:27:07.612287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.932 [2024-12-13 09:27:07.612720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:13.932 [2024-12-13 09:27:07.617643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.932 [2024-12-13 09:27:07.617931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.932 [2024-12-13 09:27:07.618056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:13.932 [2024-12-13 09:27:07.622936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.932 [2024-12-13 09:27:07.623214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.932 [2024-12-13 09:27:07.623384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:13.932 [2024-12-13 09:27:07.628358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.932 [2024-12-13 09:27:07.628649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.932 [2024-12-13 09:27:07.628893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:13.932 [2024-12-13 09:27:07.633814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.932 
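Before perform_tests, the trace above shows the same digest-error bracket as the earlier run: crc32c error injection is cleared, the controller is attached with data digest enabled (--ddgst), and injection is then re-armed as "corrupt" with interval 32, which is why the 128 KiB (len:32) READs in this run complete with the transient transport error status seen in these log lines. A rough sketch of that bracket, with the assumption that rpc_cmd addresses the nvmf target application's default RPC socket while the bperf_rpc calls address the bdevperf socket:

  # Rough sketch of the bracket traced above; the target() socket default is an
  # assumption, while the bperf socket, address, NQN and flags are taken from this run.
  spdk=/home/vagrant/spdk_repo/spdk
  bperf()  { "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
  target() { "$spdk/scripts/rpc.py" "$@"; }   # default socket /var/tmp/spdk.sock

  bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  target accel_error_inject_error -o crc32c -t disable
  bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  target accel_error_inject_error -o crc32c -t corrupt -i 32
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests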
[2024-12-13 09:27:07.634112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.932 [2024-12-13 09:27:07.634386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:13.932 [2024-12-13 09:27:07.639515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.932 [2024-12-13 09:27:07.639818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.932 [2024-12-13 09:27:07.640096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:13.932 [2024-12-13 09:27:07.645002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.932 [2024-12-13 09:27:07.645308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.932 [2024-12-13 09:27:07.645670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:13.932 [2024-12-13 09:27:07.650493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.932 [2024-12-13 09:27:07.650606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.932 [2024-12-13 09:27:07.650867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:13.932 [2024-12-13 09:27:07.655727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.932 [2024-12-13 09:27:07.655991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.932 [2024-12-13 09:27:07.656106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:13.932 [2024-12-13 09:27:07.661061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.932 [2024-12-13 09:27:07.661381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.661490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.666297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.666579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.666824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.671810] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.672098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.672372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.677203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.677532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.677821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.682836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.683188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.683490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.688548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.688833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.689150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.693960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.694233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.694508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.699571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.699889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.700114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.705150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.705488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.705807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.710639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.711001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.711126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.715891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.716004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.716093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.721016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.721170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.721276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.725976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.726256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.726419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.731427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.731719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.731954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.736966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.737236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.737502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.742615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.742950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.743266] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.748158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.748487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.748793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.753522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.753851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.754207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.759078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.759295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.759547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.764830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.765130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.765406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.770212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.770539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.770866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.775786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.775910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.776117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.780908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.781173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.781308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.786194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.786521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.786660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.791599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.791896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.792127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.797149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.797443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.797536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.802310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.802633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.933 [2024-12-13 09:27:07.803035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:13.933 [2024-12-13 09:27:07.807716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.933 [2024-12-13 09:27:07.807997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.934 [2024-12-13 09:27:07.808245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:13.934 [2024-12-13 09:27:07.813098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.934 [2024-12-13 09:27:07.813405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.934 [2024-12-13 09:27:07.813736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:13.934 [2024-12-13 09:27:07.818719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:13.934 [2024-12-13 09:27:07.819078] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.934 [2024-12-13 09:27:07.819358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.194 [2024-12-13 09:27:07.824378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.194 [2024-12-13 09:27:07.824683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.194 [2024-12-13 09:27:07.824832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.194 [2024-12-13 09:27:07.829735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.194 [2024-12-13 09:27:07.829966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.194 [2024-12-13 09:27:07.830089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.194 [2024-12-13 09:27:07.834715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.194 [2024-12-13 09:27:07.835021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.194 [2024-12-13 09:27:07.835151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.194 [2024-12-13 09:27:07.839872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.194 [2024-12-13 09:27:07.840165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.194 [2024-12-13 09:27:07.840423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.194 [2024-12-13 09:27:07.845133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.194 [2024-12-13 09:27:07.845457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.194 [2024-12-13 09:27:07.845754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.194 [2024-12-13 09:27:07.850472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.194 [2024-12-13 09:27:07.850509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.194 [2024-12-13 09:27:07.850529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.194 [2024-12-13 09:27:07.855171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:25:14.194 [2024-12-13 09:27:07.855497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.194 [2024-12-13 09:27:07.855628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.194 [2024-12-13 09:27:07.860180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.194 [2024-12-13 09:27:07.860462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.194 [2024-12-13 09:27:07.860553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.194 [2024-12-13 09:27:07.865199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.194 [2024-12-13 09:27:07.865494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.194 [2024-12-13 09:27:07.865614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.194 [2024-12-13 09:27:07.870198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.194 [2024-12-13 09:27:07.870510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.194 [2024-12-13 09:27:07.870742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.194 [2024-12-13 09:27:07.875653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.194 [2024-12-13 09:27:07.875934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.194 [2024-12-13 09:27:07.876176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.194 [2024-12-13 09:27:07.881581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.194 [2024-12-13 09:27:07.881905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.194 [2024-12-13 09:27:07.882135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.194 [2024-12-13 09:27:07.887662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.194 [2024-12-13 09:27:07.887998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.194 [2024-12-13 09:27:07.888331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.194 [2024-12-13 09:27:07.893916] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.194 [2024-12-13 09:27:07.894201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.194 [2024-12-13 09:27:07.894572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.194 [2024-12-13 09:27:07.900267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.194 [2024-12-13 09:27:07.900517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.194 [2024-12-13 09:27:07.900552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.194 [2024-12-13 09:27:07.905761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.194 [2024-12-13 09:27:07.905821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.194 [2024-12-13 09:27:07.905842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.194 [2024-12-13 09:27:07.910632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.194 [2024-12-13 09:27:07.910746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:07.910765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:07.915723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:07.915938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:07.915963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:07.920974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:07.921034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:07.921055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:07.925963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:07.926022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:07.926043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:07.930934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:07.931153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:07.931194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:07.936073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:07.936141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:07.936160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:07.941010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:07.941070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:07.941090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:07.945877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:07.946090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:07.946122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:07.951037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:07.951096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:07.951117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:07.955791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:07.955858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:07.955878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:07.960606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:07.960666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:07.960688] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:07.965362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:07.965421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:07.965442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:07.969933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:07.970015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:07.970033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:07.974655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:07.974736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:07.974754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:07.979335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:07.979403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:07.979424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:07.983894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:07.983952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:07.983975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:07.988534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:07.988599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:07.988617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:07.993110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:07.993174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:07.993192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:07.997775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:07.997834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:07.997854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:08.002352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:08.002411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:08.002432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:08.006802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:08.007060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:08.007087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:08.011873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:08.011932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:08.011955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:08.016487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:08.016545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:08.016565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:08.020965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:08.021029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:08.021047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:08.025682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:08.025746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:08.025764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:08.030214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:08.030447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:08.030477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:08.035158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:08.035264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.195 [2024-12-13 09:27:08.035285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.195 [2024-12-13 09:27:08.039730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.195 [2024-12-13 09:27:08.039796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.196 [2024-12-13 09:27:08.039814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.196 [2024-12-13 09:27:08.044288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.196 [2024-12-13 09:27:08.044350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.196 [2024-12-13 09:27:08.044367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.196 [2024-12-13 09:27:08.048766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.196 [2024-12-13 09:27:08.048825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.196 [2024-12-13 09:27:08.048847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.196 [2024-12-13 09:27:08.053335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.196 [2024-12-13 09:27:08.053395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.196 [2024-12-13 09:27:08.053418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.196 [2024-12-13 09:27:08.057756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:25:14.196 [2024-12-13 09:27:08.057820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.196 [2024-12-13 09:27:08.057838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.196 [2024-12-13 09:27:08.062246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.196 [2024-12-13 09:27:08.062331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.196 [2024-12-13 09:27:08.062354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.196 [2024-12-13 09:27:08.066804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.196 [2024-12-13 09:27:08.066884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.196 [2024-12-13 09:27:08.066929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.196 [2024-12-13 09:27:08.071545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.196 [2024-12-13 09:27:08.071611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.196 [2024-12-13 09:27:08.071630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.196 [2024-12-13 09:27:08.076026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.196 [2024-12-13 09:27:08.076093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.196 [2024-12-13 09:27:08.076112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.456 [2024-12-13 09:27:08.081208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.456 [2024-12-13 09:27:08.081456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.456 [2024-12-13 09:27:08.081492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.456 [2024-12-13 09:27:08.086263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.456 [2024-12-13 09:27:08.086368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.456 [2024-12-13 09:27:08.086393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.456 [2024-12-13 09:27:08.091164] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.456 [2024-12-13 09:27:08.091275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.456 [2024-12-13 09:27:08.091294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.456 [2024-12-13 09:27:08.095718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.456 [2024-12-13 09:27:08.095781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.456 [2024-12-13 09:27:08.095799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.456 [2024-12-13 09:27:08.100313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.456 [2024-12-13 09:27:08.100370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.456 [2024-12-13 09:27:08.100391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.456 [2024-12-13 09:27:08.104851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.456 [2024-12-13 09:27:08.104909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.456 [2024-12-13 09:27:08.104930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.456 [2024-12-13 09:27:08.109536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.456 [2024-12-13 09:27:08.109601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.456 [2024-12-13 09:27:08.109619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.456 [2024-12-13 09:27:08.114009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.456 [2024-12-13 09:27:08.114076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.456 [2024-12-13 09:27:08.114094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.456 [2024-12-13 09:27:08.118722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.456 [2024-12-13 09:27:08.118781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.456 [2024-12-13 09:27:08.118804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.456 [2024-12-13 09:27:08.123371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.456 [2024-12-13 09:27:08.123428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.456 [2024-12-13 09:27:08.123448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.456 [2024-12-13 09:27:08.127938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.456 [2024-12-13 09:27:08.128002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.456 [2024-12-13 09:27:08.128020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.457 [2024-12-13 09:27:08.132577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.457 [2024-12-13 09:27:08.132635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.457 [2024-12-13 09:27:08.132656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.457 [2024-12-13 09:27:08.137035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.457 [2024-12-13 09:27:08.137094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.457 [2024-12-13 09:27:08.137114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.457 [2024-12-13 09:27:08.141691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.457 [2024-12-13 09:27:08.141757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.457 [2024-12-13 09:27:08.141775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.457 [2024-12-13 09:27:08.146336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.457 [2024-12-13 09:27:08.146401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.457 [2024-12-13 09:27:08.146420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.457 [2024-12-13 09:27:08.150874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.457 [2024-12-13 09:27:08.151111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.457 [2024-12-13 09:27:08.151142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.457 [2024-12-13 09:27:08.155911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.457 [2024-12-13 09:27:08.155970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.457 [2024-12-13 09:27:08.155991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.457 [2024-12-13 09:27:08.160467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.457 [2024-12-13 09:27:08.160531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.457 [2024-12-13 09:27:08.160549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.457 [2024-12-13 09:27:08.165009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.457 [2024-12-13 09:27:08.165074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.457 [2024-12-13 09:27:08.165092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.457 [2024-12-13 09:27:08.169632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.457 [2024-12-13 09:27:08.169702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.457 [2024-12-13 09:27:08.169740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.457 [2024-12-13 09:27:08.174136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.457 [2024-12-13 09:27:08.174195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.457 [2024-12-13 09:27:08.174215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.457 [2024-12-13 09:27:08.178689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.457 [2024-12-13 09:27:08.178754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.457 [2024-12-13 09:27:08.178772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.457 [2024-12-13 09:27:08.183465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.457 [2024-12-13 09:27:08.183533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12832 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.457 [2024-12-13 09:27:08.183561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.457 [2024-12-13 09:27:08.187980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.457 [2024-12-13 09:27:08.188039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.457 [2024-12-13 09:27:08.188059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.457 [2024-12-13 09:27:08.192511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.457 [2024-12-13 09:27:08.192574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.457 [2024-12-13 09:27:08.192592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.457 [2024-12-13 09:27:08.196999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.457 [2024-12-13 09:27:08.197064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.457 [2024-12-13 09:27:08.197082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.457 [2024-12-13 09:27:08.201609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.457 [2024-12-13 09:27:08.201670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.457 [2024-12-13 09:27:08.201690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.457 [2024-12-13 09:27:08.206101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.457 [2024-12-13 09:27:08.206161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.457 [2024-12-13 09:27:08.206181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.457 [2024-12-13 09:27:08.210612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.457 [2024-12-13 09:27:08.210677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.457 [2024-12-13 09:27:08.210695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.457 [2024-12-13 09:27:08.215245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.457 [2024-12-13 09:27:08.215335] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.457 [2024-12-13 09:27:08.215355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.457 [2024-12-13 09:27:08.219886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.457 [2024-12-13 09:27:08.220094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.457 [2024-12-13 09:27:08.220122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.457 [2024-12-13 09:27:08.224802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.457 [2024-12-13 09:27:08.224861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.457 [2024-12-13 09:27:08.224884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.457 [2024-12-13 09:27:08.229415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.457 [2024-12-13 09:27:08.229473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.457 [2024-12-13 09:27:08.229489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.457 [2024-12-13 09:27:08.233890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.457 [2024-12-13 09:27:08.233948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.457 [2024-12-13 09:27:08.233965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.457 [2024-12-13 09:27:08.238431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.457 [2024-12-13 09:27:08.238489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.457 [2024-12-13 09:27:08.238506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.457 [2024-12-13 09:27:08.242918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.458 [2024-12-13 09:27:08.242973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.458 [2024-12-13 09:27:08.242991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.458 [2024-12-13 09:27:08.247518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:25:14.458 [2024-12-13 09:27:08.247576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.458 [2024-12-13 09:27:08.247593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.458 [2024-12-13 09:27:08.251992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.458 [2024-12-13 09:27:08.252051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.458 [2024-12-13 09:27:08.252068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.458 [2024-12-13 09:27:08.256700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.458 [2024-12-13 09:27:08.256759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.458 [2024-12-13 09:27:08.256776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.458 [2024-12-13 09:27:08.261166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.458 [2024-12-13 09:27:08.261404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.458 [2024-12-13 09:27:08.261429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.458 [2024-12-13 09:27:08.266040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.458 [2024-12-13 09:27:08.266100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.458 [2024-12-13 09:27:08.266117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.458 [2024-12-13 09:27:08.270640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.458 [2024-12-13 09:27:08.270698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.458 [2024-12-13 09:27:08.270715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.458 [2024-12-13 09:27:08.275263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.458 [2024-12-13 09:27:08.275348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.458 [2024-12-13 09:27:08.275369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.458 [2024-12-13 09:27:08.279777] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.458 [2024-12-13 09:27:08.279835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.458 [2024-12-13 09:27:08.279852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.458 [2024-12-13 09:27:08.284349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.458 [2024-12-13 09:27:08.284407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.458 [2024-12-13 09:27:08.284424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.458 [2024-12-13 09:27:08.288842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.458 [2024-12-13 09:27:08.288902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.458 [2024-12-13 09:27:08.288919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.458 [2024-12-13 09:27:08.293473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.458 [2024-12-13 09:27:08.293533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.458 [2024-12-13 09:27:08.293550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.458 [2024-12-13 09:27:08.297960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.458 [2024-12-13 09:27:08.298018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.458 [2024-12-13 09:27:08.298036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.458 [2024-12-13 09:27:08.302522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.458 [2024-12-13 09:27:08.302580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.458 [2024-12-13 09:27:08.302597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.458 [2024-12-13 09:27:08.307094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.458 [2024-12-13 09:27:08.307156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.458 [2024-12-13 09:27:08.307188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.458 [2024-12-13 09:27:08.311771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.458 [2024-12-13 09:27:08.311977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.458 [2024-12-13 09:27:08.312000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.458 [2024-12-13 09:27:08.316678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.458 [2024-12-13 09:27:08.316736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.458 [2024-12-13 09:27:08.316754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.458 [2024-12-13 09:27:08.321226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.458 [2024-12-13 09:27:08.321284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.458 [2024-12-13 09:27:08.321331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.458 [2024-12-13 09:27:08.325777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.458 [2024-12-13 09:27:08.325835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.458 [2024-12-13 09:27:08.325852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.458 [2024-12-13 09:27:08.330238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.458 [2024-12-13 09:27:08.330326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.458 [2024-12-13 09:27:08.330362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.458 [2024-12-13 09:27:08.334938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.458 [2024-12-13 09:27:08.335143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.458 [2024-12-13 09:27:08.335168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.458 [2024-12-13 09:27:08.340095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.458 [2024-12-13 09:27:08.340156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.458 [2024-12-13 09:27:08.340204] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.720 [2024-12-13 09:27:08.345311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.720 [2024-12-13 09:27:08.345589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.720 [2024-12-13 09:27:08.345624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.720 [2024-12-13 09:27:08.350480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.720 [2024-12-13 09:27:08.350540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.720 [2024-12-13 09:27:08.350558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.720 [2024-12-13 09:27:08.355088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.720 [2024-12-13 09:27:08.355151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.720 [2024-12-13 09:27:08.355184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.720 [2024-12-13 09:27:08.359781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.720 [2024-12-13 09:27:08.359840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.720 [2024-12-13 09:27:08.359857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.720 [2024-12-13 09:27:08.364379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.720 [2024-12-13 09:27:08.364437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.720 [2024-12-13 09:27:08.364453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.720 [2024-12-13 09:27:08.368908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.720 [2024-12-13 09:27:08.368968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.720 [2024-12-13 09:27:08.368985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.720 [2024-12-13 09:27:08.373439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.720 [2024-12-13 09:27:08.373495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2400 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:14.720 [2024-12-13 09:27:08.373512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.720 [2024-12-13 09:27:08.377985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.720 [2024-12-13 09:27:08.378043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.720 [2024-12-13 09:27:08.378061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.720 [2024-12-13 09:27:08.382713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.720 [2024-12-13 09:27:08.382787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.720 [2024-12-13 09:27:08.382803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.720 [2024-12-13 09:27:08.387431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.720 [2024-12-13 09:27:08.387489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.720 [2024-12-13 09:27:08.387506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.720 [2024-12-13 09:27:08.391883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.720 [2024-12-13 09:27:08.391941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.720 [2024-12-13 09:27:08.391958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.720 [2024-12-13 09:27:08.396486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.720 [2024-12-13 09:27:08.396543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.720 [2024-12-13 09:27:08.396561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.720 [2024-12-13 09:27:08.401000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.720 [2024-12-13 09:27:08.401057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.720 [2024-12-13 09:27:08.401074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.720 [2024-12-13 09:27:08.405652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.720 [2024-12-13 09:27:08.405711] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.720 [2024-12-13 09:27:08.405729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.720 [2024-12-13 09:27:08.410097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.720 [2024-12-13 09:27:08.410155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.720 [2024-12-13 09:27:08.410172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.720 [2024-12-13 09:27:08.414761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.720 [2024-12-13 09:27:08.414819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.720 [2024-12-13 09:27:08.414836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.720 [2024-12-13 09:27:08.419648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.720 [2024-12-13 09:27:08.419854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.720 [2024-12-13 09:27:08.419877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.720 [2024-12-13 09:27:08.424449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.720 [2024-12-13 09:27:08.424507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.720 [2024-12-13 09:27:08.424524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.720 [2024-12-13 09:27:08.428971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.720 [2024-12-13 09:27:08.429029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.720 [2024-12-13 09:27:08.429047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.720 [2024-12-13 09:27:08.433594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.433650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.433666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.438143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.438200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.438217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.442791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.442990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.443017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.447824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.448047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.448181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.452992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.453208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.453369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.458355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.458578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.458782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.463550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.463764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.463893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.468738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.468944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.469106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.473850] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.474076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.474209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.478984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.479251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.479440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.484242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.484481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.484593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.489246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.489501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.489647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.494343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.494567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.494770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.499654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.499880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.500012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.504889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.505103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.505234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.509942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.510166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.510345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.515298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.515548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.515736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.520362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.520565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.520588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.525207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.525442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.525573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.530335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.530553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.530686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.535516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.535735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.535867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.540677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.540894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.541005] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.545735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.545961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.546093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.550836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.551112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.551279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.556185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.556419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.556644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.561290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.561513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.561646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.566228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.721 [2024-12-13 09:27:08.566483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.721 [2024-12-13 09:27:08.566617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.721 [2024-12-13 09:27:08.571522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.722 [2024-12-13 09:27:08.571747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.722 [2024-12-13 09:27:08.571887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.722 [2024-12-13 09:27:08.576567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.722 [2024-12-13 09:27:08.576782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16576 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:14.722 [2024-12-13 09:27:08.576916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.722 [2024-12-13 09:27:08.581437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.722 [2024-12-13 09:27:08.581640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.722 [2024-12-13 09:27:08.581663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.722 [2024-12-13 09:27:08.586116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.722 [2024-12-13 09:27:08.586366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.722 [2024-12-13 09:27:08.586555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.722 [2024-12-13 09:27:08.591310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.722 [2024-12-13 09:27:08.591535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.722 [2024-12-13 09:27:08.591561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.722 6231.00 IOPS, 778.88 MiB/s [2024-12-13T09:27:08.612Z] [2024-12-13 09:27:08.597570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.722 [2024-12-13 09:27:08.597624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.722 [2024-12-13 09:27:08.597642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.722 [2024-12-13 09:27:08.602036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.722 [2024-12-13 09:27:08.602095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.722 [2024-12-13 09:27:08.602129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.995 [2024-12-13 09:27:08.607562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.995 [2024-12-13 09:27:08.607628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.995 [2024-12-13 09:27:08.607648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.995 [2024-12-13 09:27:08.613011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.995 
[2024-12-13 09:27:08.613214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.995 [2024-12-13 09:27:08.613240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.995 [2024-12-13 09:27:08.618561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.995 [2024-12-13 09:27:08.618641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.995 [2024-12-13 09:27:08.618661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.995 [2024-12-13 09:27:08.623897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.995 [2024-12-13 09:27:08.623961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.995 [2024-12-13 09:27:08.623979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.995 [2024-12-13 09:27:08.628725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.995 [2024-12-13 09:27:08.628936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.995 [2024-12-13 09:27:08.628959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.995 [2024-12-13 09:27:08.633637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.995 [2024-12-13 09:27:08.633693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.995 [2024-12-13 09:27:08.633709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.995 [2024-12-13 09:27:08.638207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.995 [2024-12-13 09:27:08.638267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.995 [2024-12-13 09:27:08.638284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.995 [2024-12-13 09:27:08.642809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.995 [2024-12-13 09:27:08.642888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.995 [2024-12-13 09:27:08.642908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.995 [2024-12-13 09:27:08.647523] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.995 [2024-12-13 09:27:08.647580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.995 [2024-12-13 09:27:08.647597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.995 [2024-12-13 09:27:08.652063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.995 [2024-12-13 09:27:08.652122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.995 [2024-12-13 09:27:08.652140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.995 [2024-12-13 09:27:08.656739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.995 [2024-12-13 09:27:08.656797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.995 [2024-12-13 09:27:08.656814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.995 [2024-12-13 09:27:08.661288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.995 [2024-12-13 09:27:08.661345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.995 [2024-12-13 09:27:08.661363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.995 [2024-12-13 09:27:08.665684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.995 [2024-12-13 09:27:08.665742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.995 [2024-12-13 09:27:08.665759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.995 [2024-12-13 09:27:08.670190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.995 [2024-12-13 09:27:08.670249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.995 [2024-12-13 09:27:08.670265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.995 [2024-12-13 09:27:08.674751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.995 [2024-12-13 09:27:08.674989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.995 [2024-12-13 09:27:08.675015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.995 [2024-12-13 09:27:08.679694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.995 [2024-12-13 09:27:08.679752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.995 [2024-12-13 09:27:08.679769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.995 [2024-12-13 09:27:08.684269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.995 [2024-12-13 09:27:08.684337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.995 [2024-12-13 09:27:08.684356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.995 [2024-12-13 09:27:08.688794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.995 [2024-12-13 09:27:08.688852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.995 [2024-12-13 09:27:08.688868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.995 [2024-12-13 09:27:08.693436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.995 [2024-12-13 09:27:08.693494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.995 [2024-12-13 09:27:08.693511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.995 [2024-12-13 09:27:08.697917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.995 [2024-12-13 09:27:08.697976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.995 [2024-12-13 09:27:08.697993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.995 [2024-12-13 09:27:08.702461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.995 [2024-12-13 09:27:08.702519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.995 [2024-12-13 09:27:08.702537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.707248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.707334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.707352] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.711873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.711931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.711948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.716434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.716492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.716510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.721033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.721245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.721268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.725787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.725846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.725863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.730337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.730396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.730413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.734791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.734872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.734905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.739458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.739516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.739533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.743949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.744007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.744024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.748537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.748595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.748612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.753039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.753098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.753115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.757580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.757639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.757657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.762054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.762113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.762130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.766630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.766690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.766722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.771146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.771222] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.771239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.775761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.775971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.775994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.780463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.780522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.780539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.784979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.785038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.785055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.789830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.789871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.789888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.794918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.795134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.795188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.800273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.800543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.800695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.805738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.805983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.806236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.811864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.812082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.812220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.817605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.817856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.817992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.823538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.823750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.823937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.829120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.829374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.829517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.834472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.996 [2024-12-13 09:27:08.834717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.996 [2024-12-13 09:27:08.834892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.996 [2024-12-13 09:27:08.840148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.997 [2024-12-13 09:27:08.840190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.997 [2024-12-13 09:27:08.840207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.997 [2024-12-13 09:27:08.844849] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.997 [2024-12-13 09:27:08.845045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.997 [2024-12-13 09:27:08.845068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.997 [2024-12-13 09:27:08.849726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.997 [2024-12-13 09:27:08.849768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.997 [2024-12-13 09:27:08.849785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.997 [2024-12-13 09:27:08.854511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.997 [2024-12-13 09:27:08.854552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.997 [2024-12-13 09:27:08.854568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:14.997 [2024-12-13 09:27:08.859256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.997 [2024-12-13 09:27:08.859327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.997 [2024-12-13 09:27:08.859346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:14.997 [2024-12-13 09:27:08.863908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.997 [2024-12-13 09:27:08.864105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.997 [2024-12-13 09:27:08.864129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:14.997 [2024-12-13 09:27:08.868901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.997 [2024-12-13 09:27:08.868942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.997 [2024-12-13 09:27:08.868958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:14.997 [2024-12-13 09:27:08.874315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:14.997 [2024-12-13 09:27:08.874371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.997 [2024-12-13 09:27:08.874389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.269 [2024-12-13 09:27:08.879797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.269 [2024-12-13 09:27:08.879843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.269 [2024-12-13 09:27:08.879862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.269 [2024-12-13 09:27:08.885003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.269 [2024-12-13 09:27:08.885046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.269 [2024-12-13 09:27:08.885062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.269 [2024-12-13 09:27:08.889923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.269 [2024-12-13 09:27:08.889965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.269 [2024-12-13 09:27:08.889982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.269 [2024-12-13 09:27:08.894553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.269 [2024-12-13 09:27:08.894594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.269 [2024-12-13 09:27:08.894611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.269 [2024-12-13 09:27:08.899415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.269 [2024-12-13 09:27:08.899626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.269 [2024-12-13 09:27:08.899783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.269 [2024-12-13 09:27:08.904988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.269 [2024-12-13 09:27:08.905208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.269 [2024-12-13 09:27:08.905504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.269 [2024-12-13 09:27:08.910907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.269 [2024-12-13 09:27:08.910957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.269 [2024-12-13 09:27:08.910976] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.269 [2024-12-13 09:27:08.916094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.269 [2024-12-13 09:27:08.916154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.269 [2024-12-13 09:27:08.916171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.269 [2024-12-13 09:27:08.921599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.269 [2024-12-13 09:27:08.921639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.269 [2024-12-13 09:27:08.921657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.269 [2024-12-13 09:27:08.926822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.269 [2024-12-13 09:27:08.926890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.269 [2024-12-13 09:27:08.926910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.269 [2024-12-13 09:27:08.932045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.269 [2024-12-13 09:27:08.932102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.269 [2024-12-13 09:27:08.932120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.269 [2024-12-13 09:27:08.937359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.269 [2024-12-13 09:27:08.937432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.269 [2024-12-13 09:27:08.937450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.269 [2024-12-13 09:27:08.942589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.269 [2024-12-13 09:27:08.942635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.269 [2024-12-13 09:27:08.942682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.269 [2024-12-13 09:27:08.947775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.269 [2024-12-13 09:27:08.947832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.269 [2024-12-13 09:27:08.947849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.269 [2024-12-13 09:27:08.952823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.269 [2024-12-13 09:27:08.952880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.269 [2024-12-13 09:27:08.952897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.269 [2024-12-13 09:27:08.957865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.269 [2024-12-13 09:27:08.957923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.269 [2024-12-13 09:27:08.957940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.269 [2024-12-13 09:27:08.962928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.269 [2024-12-13 09:27:08.962984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.269 [2024-12-13 09:27:08.963001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:08.967700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:08.967770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:08.967786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:08.972220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:08.972274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:08.972300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:08.976825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:08.976879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:08.976896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:08.981478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:08.981533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:08.981549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:08.986105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:08.986160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:08.986177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:08.991149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:08.991236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:08.991266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:08.995831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:08.995886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:08.995901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.000548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.000603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.000619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.005183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.005239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.005254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.009893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.009948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.009964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.014648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.014705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.014722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.019464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.019506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.019522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.024155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.024209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.024225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.028731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.028786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.028802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.033419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.033474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.033490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.038124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.038196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.038212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.043128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.043182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.043199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.047753] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.047807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.047823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.052533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.052587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.052603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.057145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.057199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.057215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.061806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.061861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.061878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.066420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.066457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.066473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.071343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.071412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.071429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.075982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.076038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.076054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.080592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.080646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.080662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.085304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.085359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.085375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.090162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.090206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.090223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.095658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.095717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.095734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.101055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.101096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.101113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.106010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.106065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.106081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.110521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.110574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.110590] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.115241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.115316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.115334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.119909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.119962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.119978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.124468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.124521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.124537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.128987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.129057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.129073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.133592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.133633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.133649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.138157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.138210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.138226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.142668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.142723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.142739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.147399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.147440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.147456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.270 [2024-12-13 09:27:09.151913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.270 [2024-12-13 09:27:09.151967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.270 [2024-12-13 09:27:09.151982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.530 [2024-12-13 09:27:09.157059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.530 [2024-12-13 09:27:09.157116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.530 [2024-12-13 09:27:09.157133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.530 [2024-12-13 09:27:09.162017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.530 [2024-12-13 09:27:09.162132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.530 [2024-12-13 09:27:09.162148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.530 [2024-12-13 09:27:09.166600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.530 [2024-12-13 09:27:09.166654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.530 [2024-12-13 09:27:09.166670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.530 [2024-12-13 09:27:09.171191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.530 [2024-12-13 09:27:09.171289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.530 [2024-12-13 09:27:09.171326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.530 [2024-12-13 09:27:09.175792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.530 [2024-12-13 09:27:09.175845] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.530 [2024-12-13 09:27:09.175861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.530 [2024-12-13 09:27:09.180502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.530 [2024-12-13 09:27:09.180556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.530 [2024-12-13 09:27:09.180571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.530 [2024-12-13 09:27:09.185065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.530 [2024-12-13 09:27:09.185119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.530 [2024-12-13 09:27:09.185135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.530 [2024-12-13 09:27:09.189724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.530 [2024-12-13 09:27:09.189779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.530 [2024-12-13 09:27:09.189795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.530 [2024-12-13 09:27:09.194229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.530 [2024-12-13 09:27:09.194283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.530 [2024-12-13 09:27:09.194311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.530 [2024-12-13 09:27:09.198641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.530 [2024-12-13 09:27:09.198695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.530 [2024-12-13 09:27:09.198711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.530 [2024-12-13 09:27:09.203123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.203194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.203211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.207727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.207781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.207797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.212183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.212236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.212252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.216759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.216813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.216829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.221292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.221344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.221360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.225783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.225838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.225854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.230198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.230252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.230268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.234637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.234690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.234706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.239209] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.239279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.239316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.243722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.243776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.243793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.248181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.248235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.248251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.252768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.252823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.252838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.257296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.257349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.257364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.261768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.261823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.261839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.266191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.266245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.266260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.270579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.270634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.270649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.275098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.275153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.275181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.279617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.279686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.279701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.284196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.284250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.284266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.288724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.288778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.288794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.293136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.293189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.293206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.297746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.297801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.297816] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.302261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.302324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.302340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.306798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.306876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.306895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.311383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.311438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.311453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.315982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.316037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.316052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.320486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.320540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.320555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.325033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.325087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.325103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.329556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.329610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3456 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.329625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.334104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.334158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.334173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.338567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.338621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.338636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.343060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.343102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.343118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.347552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.347607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.347624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.352072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.352127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.352142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.356581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.356635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.356650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.361111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.361165] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.531 [2024-12-13 09:27:09.361180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.531 [2024-12-13 09:27:09.365599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.531 [2024-12-13 09:27:09.365653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.532 [2024-12-13 09:27:09.365668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.532 [2024-12-13 09:27:09.370065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.532 [2024-12-13 09:27:09.370120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.532 [2024-12-13 09:27:09.370136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.532 [2024-12-13 09:27:09.374594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.532 [2024-12-13 09:27:09.374649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.532 [2024-12-13 09:27:09.374664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.532 [2024-12-13 09:27:09.379074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.532 [2024-12-13 09:27:09.379117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.532 [2024-12-13 09:27:09.379134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.532 [2024-12-13 09:27:09.383653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.532 [2024-12-13 09:27:09.383723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.532 [2024-12-13 09:27:09.383739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.532 [2024-12-13 09:27:09.388357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.532 [2024-12-13 09:27:09.388412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.532 [2024-12-13 09:27:09.388429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.532 [2024-12-13 09:27:09.392905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:25:15.532 [2024-12-13 09:27:09.392960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.532 [2024-12-13 09:27:09.392977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.532 [2024-12-13 09:27:09.397604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.532 [2024-12-13 09:27:09.397659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.532 [2024-12-13 09:27:09.397675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.532 [2024-12-13 09:27:09.402240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.532 [2024-12-13 09:27:09.402306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.532 [2024-12-13 09:27:09.402324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.532 [2024-12-13 09:27:09.406928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.532 [2024-12-13 09:27:09.407013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.532 [2024-12-13 09:27:09.407043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.532 [2024-12-13 09:27:09.411733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.532 [2024-12-13 09:27:09.411788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.532 [2024-12-13 09:27:09.411804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.532 [2024-12-13 09:27:09.416846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.532 [2024-12-13 09:27:09.416902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.532 [2024-12-13 09:27:09.416918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.791 [2024-12-13 09:27:09.421694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.791 [2024-12-13 09:27:09.421747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.792 [2024-12-13 09:27:09.421763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.792 [2024-12-13 09:27:09.426472] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.792 [2024-12-13 09:27:09.426525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.792 [2024-12-13 09:27:09.426541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.792 [2024-12-13 09:27:09.430924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.792 [2024-12-13 09:27:09.430982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.792 [2024-12-13 09:27:09.430999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.792 [2024-12-13 09:27:09.435659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.792 [2024-12-13 09:27:09.435715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.792 [2024-12-13 09:27:09.435732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.792 [2024-12-13 09:27:09.440192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.792 [2024-12-13 09:27:09.440246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.792 [2024-12-13 09:27:09.440261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.792 [2024-12-13 09:27:09.444705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.792 [2024-12-13 09:27:09.444759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.792 [2024-12-13 09:27:09.444775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.792 [2024-12-13 09:27:09.449219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.792 [2024-12-13 09:27:09.449273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.792 [2024-12-13 09:27:09.449289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.792 [2024-12-13 09:27:09.453720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.792 [2024-12-13 09:27:09.453775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.792 [2024-12-13 09:27:09.453791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.792 [2024-12-13 09:27:09.458186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.792 [2024-12-13 09:27:09.458239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.792 [2024-12-13 09:27:09.458255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.792 [2024-12-13 09:27:09.462669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.792 [2024-12-13 09:27:09.462723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.792 [2024-12-13 09:27:09.462738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.792 [2024-12-13 09:27:09.467259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.792 [2024-12-13 09:27:09.467335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.792 [2024-12-13 09:27:09.467352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.792 [2024-12-13 09:27:09.471746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.792 [2024-12-13 09:27:09.471800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.792 [2024-12-13 09:27:09.471815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.792 [2024-12-13 09:27:09.476385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.792 [2024-12-13 09:27:09.476438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.792 [2024-12-13 09:27:09.476454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.792 [2024-12-13 09:27:09.480898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.792 [2024-12-13 09:27:09.480953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.792 [2024-12-13 09:27:09.480968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.792 [2024-12-13 09:27:09.485422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.792 [2024-12-13 09:27:09.485476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.792 [2024-12-13 09:27:09.485492] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.792 [2024-12-13 09:27:09.489955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.792 [2024-12-13 09:27:09.490009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.792 [2024-12-13 09:27:09.490025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.792 [2024-12-13 09:27:09.494497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.792 [2024-12-13 09:27:09.494551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.792 [2024-12-13 09:27:09.494566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.792 [2024-12-13 09:27:09.499075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.792 [2024-12-13 09:27:09.499118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.792 [2024-12-13 09:27:09.499134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.792 [2024-12-13 09:27:09.503690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.792 [2024-12-13 09:27:09.503744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.792 [2024-12-13 09:27:09.503759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.792 [2024-12-13 09:27:09.508314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.792 [2024-12-13 09:27:09.508366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.792 [2024-12-13 09:27:09.508382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.792 [2024-12-13 09:27:09.512861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.792 [2024-12-13 09:27:09.512914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.792 [2024-12-13 09:27:09.512930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.792 [2024-12-13 09:27:09.517444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.792 [2024-12-13 09:27:09.517499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3808 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:15.792 [2024-12-13 09:27:09.517515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.792 [2024-12-13 09:27:09.521896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.792 [2024-12-13 09:27:09.521949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.792 [2024-12-13 09:27:09.521965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.792 [2024-12-13 09:27:09.526357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.792 [2024-12-13 09:27:09.526411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.792 [2024-12-13 09:27:09.526427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.792 [2024-12-13 09:27:09.530761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.792 [2024-12-13 09:27:09.530815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.792 [2024-12-13 09:27:09.530831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.792 [2024-12-13 09:27:09.535446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.792 [2024-12-13 09:27:09.535500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.793 [2024-12-13 09:27:09.535516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.793 [2024-12-13 09:27:09.539900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.793 [2024-12-13 09:27:09.539953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.793 [2024-12-13 09:27:09.539969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.793 [2024-12-13 09:27:09.544446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.793 [2024-12-13 09:27:09.544500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.793 [2024-12-13 09:27:09.544516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.793 [2024-12-13 09:27:09.548899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.793 [2024-12-13 09:27:09.548953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.793 [2024-12-13 09:27:09.548969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.793 [2024-12-13 09:27:09.553459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.793 [2024-12-13 09:27:09.553513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.793 [2024-12-13 09:27:09.553529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.793 [2024-12-13 09:27:09.557974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.793 [2024-12-13 09:27:09.558028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.793 [2024-12-13 09:27:09.558044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.793 [2024-12-13 09:27:09.562502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.793 [2024-12-13 09:27:09.562556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.793 [2024-12-13 09:27:09.562572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.793 [2024-12-13 09:27:09.567118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.793 [2024-12-13 09:27:09.567200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.793 [2024-12-13 09:27:09.567216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.793 [2024-12-13 09:27:09.571677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.793 [2024-12-13 09:27:09.571746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.793 [2024-12-13 09:27:09.571762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.793 [2024-12-13 09:27:09.576133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.793 [2024-12-13 09:27:09.576187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.793 [2024-12-13 09:27:09.576203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.793 [2024-12-13 09:27:09.580680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:25:15.793 [2024-12-13 09:27:09.580734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.793 [2024-12-13 09:27:09.580750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.793 [2024-12-13 09:27:09.585238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.793 [2024-12-13 09:27:09.585301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.793 [2024-12-13 09:27:09.585319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.793 [2024-12-13 09:27:09.589735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.793 [2024-12-13 09:27:09.589790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.793 [2024-12-13 09:27:09.589806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.793 [2024-12-13 09:27:09.594201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:15.793 [2024-12-13 09:27:09.594255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.793 [2024-12-13 09:27:09.594271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.793 6393.50 IOPS, 799.19 MiB/s 00:25:15.793 Latency(us) 00:25:15.793 [2024-12-13T09:27:09.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.793 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:15.793 nvme0n1 : 2.00 6392.42 799.05 0.00 0.00 2499.18 2010.76 8757.99 00:25:15.793 [2024-12-13T09:27:09.683Z] =================================================================================================================== 00:25:15.793 [2024-12-13T09:27:09.683Z] Total : 6392.42 799.05 0.00 0.00 2499.18 2010.76 8757.99 00:25:15.793 { 00:25:15.793 "results": [ 00:25:15.793 { 00:25:15.793 "job": "nvme0n1", 00:25:15.793 "core_mask": "0x2", 00:25:15.793 "workload": "randread", 00:25:15.793 "status": "finished", 00:25:15.793 "queue_depth": 16, 00:25:15.793 "io_size": 131072, 00:25:15.793 "runtime": 2.00284, 00:25:15.793 "iops": 6392.4227596812525, 00:25:15.793 "mibps": 799.0528449601566, 00:25:15.793 "io_failed": 0, 00:25:15.793 "io_timeout": 0, 00:25:15.793 "avg_latency_us": 2499.175710238367, 00:25:15.793 "min_latency_us": 2010.7636363636364, 00:25:15.793 "max_latency_us": 8757.992727272727 00:25:15.793 } 00:25:15.793 ], 00:25:15.793 "core_count": 1 00:25:15.793 } 00:25:15.793 09:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:15.793 09:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:15.793 | .driver_specific 00:25:15.793 | .nvme_error 00:25:15.793 | .status_code 00:25:15.793 | 
.command_transient_transport_error' 00:25:15.793 09:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:15.793 09:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:16.052 09:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 413 > 0 )) 00:25:16.052 09:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 88352 00:25:16.052 09:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 88352 ']' 00:25:16.052 09:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 88352 00:25:16.052 09:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:16.052 09:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:16.052 09:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88352 00:25:16.311 09:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:16.311 killing process with pid 88352 00:25:16.311 Received shutdown signal, test time was about 2.000000 seconds 00:25:16.311 00:25:16.311 Latency(us) 00:25:16.311 [2024-12-13T09:27:10.201Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.311 [2024-12-13T09:27:10.201Z] =================================================================================================================== 00:25:16.311 [2024-12-13T09:27:10.201Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:16.311 09:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:16.311 09:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88352' 00:25:16.311 09:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 88352 00:25:16.311 09:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 88352 00:25:17.248 09:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:17.248 09:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:17.248 09:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:17.248 09:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:17.248 09:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:17.248 09:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=88413 00:25:17.248 09:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 88413 /var/tmp/bperf.sock 00:25:17.248 09:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:17.248 09:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 88413 ']' 00:25:17.248 
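The trace above and below exercises the nvmf_host digest-error path: bdev_nvme is configured to keep NVMe error statistics and retry indefinitely, a controller is attached with data digest enabled, crc32c corruption is injected through the accel error-injection RPC, bdevperf drives I/O, and the resulting transient transport errors are read back from bdev_get_iostat. A minimal standalone sketch of that same flow follows, using only the commands and values visible in this log (socket path, target address, subsystem NQN, injection count); it is a hedged reconstruction, not the test script itself.

#!/usr/bin/env bash
# Sketch of the digest-error check driven by host/digest.sh, reconstructed from the trace above.
set -euo pipefail
RPC="./scripts/rpc.py -s /var/tmp/bperf.sock"                      # bdevperf RPC socket from the log
BPERF_PY="./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock"

# Keep per-bdev NVMe error statistics and retry failed I/O forever, so injected
# digest errors show up as transient-transport-error counters instead of failing I/O.
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the NVMe/TCP target with data digest enabled (--ddgst), as in the log.
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt 256 crc32c operations in the accel layer to force data digest errors.
$RPC accel_error_inject_error -o crc32c -t corrupt -i 256

# Run the configured bdevperf workload, then read the error counter back.
$BPERF_PY perform_tests
errs=$($RPC bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
if (( errs > 0 )); then
  echo "observed $errs transient transport errors"
fi

The same sequence is repeated for each workload in this section (randread with 128 KiB I/O above, randwrite with 4 KiB I/O below); only the bdevperf arguments change between runs.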
09:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:17.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:17.248 09:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:17.248 09:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:17.248 09:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:17.248 09:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:17.248 [2024-12-13 09:27:10.892412] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:25:17.248 [2024-12-13 09:27:10.892580] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88413 ] 00:25:17.248 [2024-12-13 09:27:11.065623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.527 [2024-12-13 09:27:11.148175] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.527 [2024-12-13 09:27:11.291813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:18.095 09:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:18.095 09:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:18.095 09:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:18.095 09:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:18.354 09:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:18.354 09:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.354 09:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:18.354 09:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.354 09:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:18.354 09:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:18.612 nvme0n1 00:25:18.613 09:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:18.613 09:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.613 09:27:12 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:18.613 09:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.613 09:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:18.613 09:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:18.613 Running I/O for 2 seconds... 00:25:18.613 [2024-12-13 09:27:12.460319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7100 00:25:18.613 [2024-12-13 09:27:12.462033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.613 [2024-12-13 09:27:12.462098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.613 [2024-12-13 09:27:12.476992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7970 00:25:18.613 [2024-12-13 09:27:12.479015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.613 [2024-12-13 09:27:12.479226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.613 [2024-12-13 09:27:12.493814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f81e0 00:25:18.613 [2024-12-13 09:27:12.495806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.613 [2024-12-13 09:27:12.496014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:18.872 [2024-12-13 09:27:12.512233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8a50 00:25:18.872 [2024-12-13 09:27:12.514087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.872 [2024-12-13 09:27:12.514317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:18.872 [2024-12-13 09:27:12.529106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f92c0 00:25:18.872 [2024-12-13 09:27:12.530950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.872 [2024-12-13 09:27:12.531159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:18.872 [2024-12-13 09:27:12.546230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f9b30 00:25:18.872 [2024-12-13 09:27:12.548146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.872 [2024-12-13 09:27:12.548362] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:18.872 [2024-12-13 09:27:12.562978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fa3a0 00:25:18.872 [2024-12-13 09:27:12.564757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.872 [2024-12-13 09:27:12.564970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:18.872 [2024-12-13 09:27:12.579832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fac10 00:25:18.872 [2024-12-13 09:27:12.581563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.872 [2024-12-13 09:27:12.581772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:18.872 [2024-12-13 09:27:12.596499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb480 00:25:18.872 [2024-12-13 09:27:12.598200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.872 [2024-12-13 09:27:12.598424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:18.872 [2024-12-13 09:27:12.613781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fbcf0 00:25:18.872 [2024-12-13 09:27:12.615665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.872 [2024-12-13 09:27:12.615709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:18.872 [2024-12-13 09:27:12.630941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc560 00:25:18.872 [2024-12-13 09:27:12.632602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.872 [2024-12-13 09:27:12.632667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:18.872 [2024-12-13 09:27:12.649174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fcdd0 00:25:18.872 [2024-12-13 09:27:12.650946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.872 [2024-12-13 09:27:12.651001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:18.872 [2024-12-13 09:27:12.667716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd640 00:25:18.872 [2024-12-13 09:27:12.669215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:18.872 [2024-12-13 09:27:12.669278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:18.872 [2024-12-13 09:27:12.684746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fdeb0 00:25:18.872 [2024-12-13 09:27:12.686350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.872 [2024-12-13 09:27:12.686408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:18.872 [2024-12-13 09:27:12.702066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe720 00:25:18.872 [2024-12-13 09:27:12.703803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.872 [2024-12-13 09:27:12.703860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:18.872 [2024-12-13 09:27:12.719701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ff3c8 00:25:18.872 [2024-12-13 09:27:12.721144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.872 [2024-12-13 09:27:12.721349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:18.872 [2024-12-13 09:27:12.744098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ff3c8 00:25:18.872 [2024-12-13 09:27:12.746930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.872 [2024-12-13 09:27:12.746976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.132 [2024-12-13 09:27:12.762240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe720 00:25:19.132 [2024-12-13 09:27:12.765411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.132 [2024-12-13 09:27:12.765454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:19.132 [2024-12-13 09:27:12.780119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fdeb0 00:25:19.132 [2024-12-13 09:27:12.782808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.132 [2024-12-13 09:27:12.783032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:19.132 [2024-12-13 09:27:12.797719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd640 00:25:19.132 [2024-12-13 09:27:12.800461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 
lba:16556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.132 [2024-12-13 09:27:12.800527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:19.132 [2024-12-13 09:27:12.814570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fcdd0 00:25:19.132 [2024-12-13 09:27:12.817687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.132 [2024-12-13 09:27:12.817737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:19.132 [2024-12-13 09:27:12.831880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc560 00:25:19.132 [2024-12-13 09:27:12.834506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.132 [2024-12-13 09:27:12.834570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:19.132 [2024-12-13 09:27:12.849012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fbcf0 00:25:19.132 [2024-12-13 09:27:12.851818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.132 [2024-12-13 09:27:12.851856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:19.132 [2024-12-13 09:27:12.866217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb480 00:25:19.132 [2024-12-13 09:27:12.868933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.132 [2024-12-13 09:27:12.868975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:19.132 [2024-12-13 09:27:12.883753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fac10 00:25:19.132 [2024-12-13 09:27:12.886409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.132 [2024-12-13 09:27:12.886603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:19.132 [2024-12-13 09:27:12.901119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fa3a0 00:25:19.132 [2024-12-13 09:27:12.903781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.132 [2024-12-13 09:27:12.903844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:19.132 [2024-12-13 09:27:12.917478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f9b30 00:25:19.132 [2024-12-13 09:27:12.920206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.132 [2024-12-13 09:27:12.920264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:19.132 [2024-12-13 09:27:12.933913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f92c0 00:25:19.132 [2024-12-13 09:27:12.936440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.132 [2024-12-13 09:27:12.936501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:19.132 [2024-12-13 09:27:12.950032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8a50 00:25:19.132 [2024-12-13 09:27:12.952640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.132 [2024-12-13 09:27:12.952712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:19.132 [2024-12-13 09:27:12.966413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f81e0 00:25:19.132 [2024-12-13 09:27:12.968928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.132 [2024-12-13 09:27:12.968972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:19.132 [2024-12-13 09:27:12.982799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7970 00:25:19.132 [2024-12-13 09:27:12.985304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.132 [2024-12-13 09:27:12.985395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:19.132 [2024-12-13 09:27:12.999810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7100 00:25:19.132 [2024-12-13 09:27:13.002481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.132 [2024-12-13 09:27:13.002546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:19.132 [2024-12-13 09:27:13.019477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6890 00:25:19.392 [2024-12-13 09:27:13.022231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.392 [2024-12-13 09:27:13.022319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.392 [2024-12-13 09:27:13.037399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6020 
00:25:19.392 [2024-12-13 09:27:13.039822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.392 [2024-12-13 09:27:13.039882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:19.392 [2024-12-13 09:27:13.053897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f57b0 00:25:19.392 [2024-12-13 09:27:13.056297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.392 [2024-12-13 09:27:13.056344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:19.392 [2024-12-13 09:27:13.070048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:25:19.392 [2024-12-13 09:27:13.072481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.392 [2024-12-13 09:27:13.072524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:19.392 [2024-12-13 09:27:13.086331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f46d0 00:25:19.392 [2024-12-13 09:27:13.088618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.392 [2024-12-13 09:27:13.088681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:19.392 [2024-12-13 09:27:13.102551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f3e60 00:25:19.392 [2024-12-13 09:27:13.104881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.392 [2024-12-13 09:27:13.104940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:19.392 [2024-12-13 09:27:13.118742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f35f0 00:25:19.392 [2024-12-13 09:27:13.121015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.392 [2024-12-13 09:27:13.121069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:19.392 [2024-12-13 09:27:13.134997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2d80 00:25:19.392 [2024-12-13 09:27:13.137522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.392 [2024-12-13 09:27:13.137567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:19.392 [2024-12-13 09:27:13.151642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000004480) with pdu=0x2000173f2510 00:25:19.392 [2024-12-13 09:27:13.153855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.392 [2024-12-13 09:27:13.153895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:19.392 [2024-12-13 09:27:13.167849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1ca0 00:25:19.392 [2024-12-13 09:27:13.170080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.392 [2024-12-13 09:27:13.170140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:19.392 [2024-12-13 09:27:13.184515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1430 00:25:19.392 [2024-12-13 09:27:13.186953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.392 [2024-12-13 09:27:13.187017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:19.392 [2024-12-13 09:27:13.200966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0bc0 00:25:19.392 [2024-12-13 09:27:13.203215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.392 [2024-12-13 09:27:13.203285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:19.392 [2024-12-13 09:27:13.217446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0350 00:25:19.392 [2024-12-13 09:27:13.219611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.392 [2024-12-13 09:27:13.219652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:19.392 [2024-12-13 09:27:13.233511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173efae0 00:25:19.392 [2024-12-13 09:27:13.235665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.392 [2024-12-13 09:27:13.235758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:19.392 [2024-12-13 09:27:13.249746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef270 00:25:19.392 [2024-12-13 09:27:13.251891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.392 [2024-12-13 09:27:13.251951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:19.392 [2024-12-13 09:27:13.266009] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eea00 00:25:19.392 [2024-12-13 09:27:13.268174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.392 [2024-12-13 09:27:13.268214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:19.651 [2024-12-13 09:27:13.283799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:25:19.651 [2024-12-13 09:27:13.285975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.652 [2024-12-13 09:27:13.286016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.652 [2024-12-13 09:27:13.300441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed920 00:25:19.652 [2024-12-13 09:27:13.302470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.652 [2024-12-13 09:27:13.302661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:19.652 [2024-12-13 09:27:13.316939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed0b0 00:25:19.652 [2024-12-13 09:27:13.319029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.652 [2024-12-13 09:27:13.319081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:19.652 [2024-12-13 09:27:13.333103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ec840 00:25:19.652 [2024-12-13 09:27:13.335225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.652 [2024-12-13 09:27:13.335327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:19.652 [2024-12-13 09:27:13.349290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebfd0 00:25:19.652 [2024-12-13 09:27:13.351326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.652 [2024-12-13 09:27:13.351376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:19.652 [2024-12-13 09:27:13.365410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb760 00:25:19.652 [2024-12-13 09:27:13.367474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.652 [2024-12-13 09:27:13.367529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 
p:0 m:0 dnr:0 00:25:19.652 [2024-12-13 09:27:13.381750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaef0 00:25:19.652 [2024-12-13 09:27:13.383789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.652 [2024-12-13 09:27:13.383851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:19.652 [2024-12-13 09:27:13.398036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ea680 00:25:19.652 [2024-12-13 09:27:13.400044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.652 [2024-12-13 09:27:13.400104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:19.652 [2024-12-13 09:27:13.414552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9e10 00:25:19.652 [2024-12-13 09:27:13.416810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.652 [2024-12-13 09:27:13.416869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:19.652 [2024-12-13 09:27:13.431044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e95a0 00:25:19.652 [2024-12-13 09:27:13.433022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.652 [2024-12-13 09:27:13.433077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:19.652 14929.00 IOPS, 58.32 MiB/s [2024-12-13T09:27:13.542Z] [2024-12-13 09:27:13.447480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8d30 00:25:19.652 [2024-12-13 09:27:13.449292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.652 [2024-12-13 09:27:13.449372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:19.652 [2024-12-13 09:27:13.463601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e84c0 00:25:19.652 [2024-12-13 09:27:13.465454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.652 [2024-12-13 09:27:13.465507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:19.652 [2024-12-13 09:27:13.480306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e7c50 00:25:19.652 [2024-12-13 09:27:13.482084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.652 [2024-12-13 09:27:13.482144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:19.652 [2024-12-13 09:27:13.496624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e73e0 00:25:19.652 [2024-12-13 09:27:13.498406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.652 [2024-12-13 09:27:13.498468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:19.652 [2024-12-13 09:27:13.512994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6b70 00:25:19.652 [2024-12-13 09:27:13.514808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.652 [2024-12-13 09:27:13.515015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:19.652 [2024-12-13 09:27:13.529692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6300 00:25:19.652 [2024-12-13 09:27:13.531557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.652 [2024-12-13 09:27:13.531613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:19.912 [2024-12-13 09:27:13.547485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5a90 00:25:19.912 [2024-12-13 09:27:13.549178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.912 [2024-12-13 09:27:13.549234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.912 [2024-12-13 09:27:13.564123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5220 00:25:19.912 [2024-12-13 09:27:13.565935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.912 [2024-12-13 09:27:13.565998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:19.912 [2024-12-13 09:27:13.580530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e49b0 00:25:19.912 [2024-12-13 09:27:13.582164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.912 [2024-12-13 09:27:13.582215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:19.912 [2024-12-13 09:27:13.597029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4140 00:25:19.912 [2024-12-13 09:27:13.598713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4647 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:19.912 [2024-12-13 09:27:13.598769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:19.912 [2024-12-13 09:27:13.613309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e38d0 00:25:19.912 [2024-12-13 09:27:13.614946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.912 [2024-12-13 09:27:13.614990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:19.912 [2024-12-13 09:27:13.629512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3060 00:25:19.912 [2024-12-13 09:27:13.631126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.912 [2024-12-13 09:27:13.631186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:19.912 [2024-12-13 09:27:13.646353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e27f0 00:25:19.912 [2024-12-13 09:27:13.648128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.912 [2024-12-13 09:27:13.648192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:19.912 [2024-12-13 09:27:13.663571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1f80 00:25:19.912 [2024-12-13 09:27:13.665436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.912 [2024-12-13 09:27:13.665499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:19.912 [2024-12-13 09:27:13.680060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1710 00:25:19.912 [2024-12-13 09:27:13.681716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.912 [2024-12-13 09:27:13.681787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:19.912 [2024-12-13 09:27:13.696471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0ea0 00:25:19.912 [2024-12-13 09:27:13.698248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.912 [2024-12-13 09:27:13.698308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:19.912 [2024-12-13 09:27:13.713487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0630 00:25:19.912 [2024-12-13 09:27:13.715002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:19 nsid:1 lba:14401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.912 [2024-12-13 09:27:13.715068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:19.912 [2024-12-13 09:27:13.729871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dfdc0 00:25:19.912 [2024-12-13 09:27:13.731474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.912 [2024-12-13 09:27:13.731536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:19.912 [2024-12-13 09:27:13.746122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df550 00:25:19.912 [2024-12-13 09:27:13.747818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.912 [2024-12-13 09:27:13.747877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:19.912 [2024-12-13 09:27:13.762446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dece0 00:25:19.912 [2024-12-13 09:27:13.764230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.912 [2024-12-13 09:27:13.764285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:19.912 [2024-12-13 09:27:13.778805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de470 00:25:19.912 [2024-12-13 09:27:13.780309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.912 [2024-12-13 09:27:13.780376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:20.171 [2024-12-13 09:27:13.802523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ddc00 00:25:20.171 [2024-12-13 09:27:13.805797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.171 [2024-12-13 09:27:13.805839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:20.171 [2024-12-13 09:27:13.820017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de470 00:25:20.171 [2024-12-13 09:27:13.822698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.171 [2024-12-13 09:27:13.822924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:20.171 [2024-12-13 09:27:13.836687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dece0 00:25:20.171 [2024-12-13 09:27:13.839607] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.172 [2024-12-13 09:27:13.839662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:20.172 [2024-12-13 09:27:13.853116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df550 00:25:20.172 [2024-12-13 09:27:13.855939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.172 [2024-12-13 09:27:13.856002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:20.172 [2024-12-13 09:27:13.869450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dfdc0 00:25:20.172 [2024-12-13 09:27:13.872377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.172 [2024-12-13 09:27:13.872440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:20.172 [2024-12-13 09:27:13.886397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0630 00:25:20.172 [2024-12-13 09:27:13.889203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.172 [2024-12-13 09:27:13.889260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:20.172 [2024-12-13 09:27:13.904863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0ea0 00:25:20.172 [2024-12-13 09:27:13.907855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.172 [2024-12-13 09:27:13.907914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:20.172 [2024-12-13 09:27:13.922964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1710 00:25:20.172 [2024-12-13 09:27:13.925668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.172 [2024-12-13 09:27:13.925709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:20.172 [2024-12-13 09:27:13.940314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1f80 00:25:20.172 [2024-12-13 09:27:13.942905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.172 [2024-12-13 09:27:13.943106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:20.172 [2024-12-13 09:27:13.957863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) 
with pdu=0x2000173e27f0 00:25:20.172 [2024-12-13 09:27:13.960510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.172 [2024-12-13 09:27:13.960706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:20.172 [2024-12-13 09:27:13.974986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3060 00:25:20.172 [2024-12-13 09:27:13.977733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.172 [2024-12-13 09:27:13.977781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:20.172 [2024-12-13 09:27:13.992332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e38d0 00:25:20.172 [2024-12-13 09:27:13.994942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.172 [2024-12-13 09:27:13.995008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:20.172 [2024-12-13 09:27:14.009549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4140 00:25:20.172 [2024-12-13 09:27:14.012157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.172 [2024-12-13 09:27:14.012214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:20.172 [2024-12-13 09:27:14.028041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e49b0 00:25:20.172 [2024-12-13 09:27:14.031028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.172 [2024-12-13 09:27:14.031076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:20.172 [2024-12-13 09:27:14.046964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5220 00:25:20.172 [2024-12-13 09:27:14.049736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.172 [2024-12-13 09:27:14.049945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:20.431 [2024-12-13 09:27:14.065736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5a90 00:25:20.431 [2024-12-13 09:27:14.068296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.431 [2024-12-13 09:27:14.068337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:20.431 [2024-12-13 09:27:14.083083] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6300 00:25:20.431 [2024-12-13 09:27:14.085573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.431 [2024-12-13 09:27:14.085634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.431 [2024-12-13 09:27:14.100599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6b70 00:25:20.431 [2024-12-13 09:27:14.103042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.431 [2024-12-13 09:27:14.103097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:20.431 [2024-12-13 09:27:14.117632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e73e0 00:25:20.431 [2024-12-13 09:27:14.120269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.431 [2024-12-13 09:27:14.120339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:20.431 [2024-12-13 09:27:14.134746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e7c50 00:25:20.431 [2024-12-13 09:27:14.137387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.431 [2024-12-13 09:27:14.137456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:20.431 [2024-12-13 09:27:14.152828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e84c0 00:25:20.431 [2024-12-13 09:27:14.155187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.431 [2024-12-13 09:27:14.155244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:20.431 [2024-12-13 09:27:14.169213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8d30 00:25:20.431 [2024-12-13 09:27:14.171714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.431 [2024-12-13 09:27:14.171769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:20.431 [2024-12-13 09:27:14.185938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e95a0 00:25:20.431 [2024-12-13 09:27:14.188505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.431 [2024-12-13 09:27:14.188546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0036 
p:0 m:0 dnr:0 00:25:20.431 [2024-12-13 09:27:14.202515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9e10 00:25:20.431 [2024-12-13 09:27:14.204747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.432 [2024-12-13 09:27:14.204807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:20.432 [2024-12-13 09:27:14.218585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ea680 00:25:20.432 [2024-12-13 09:27:14.220803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.432 [2024-12-13 09:27:14.220858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:20.432 [2024-12-13 09:27:14.234775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaef0 00:25:20.432 [2024-12-13 09:27:14.237256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.432 [2024-12-13 09:27:14.237340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:20.432 [2024-12-13 09:27:14.251291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb760 00:25:20.432 [2024-12-13 09:27:14.253480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.432 [2024-12-13 09:27:14.253536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:20.432 [2024-12-13 09:27:14.267463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebfd0 00:25:20.432 [2024-12-13 09:27:14.269604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.432 [2024-12-13 09:27:14.269666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:20.432 [2024-12-13 09:27:14.283631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ec840 00:25:20.432 [2024-12-13 09:27:14.285819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.432 [2024-12-13 09:27:14.285858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:20.432 [2024-12-13 09:27:14.300026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed0b0 00:25:20.432 [2024-12-13 09:27:14.302278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.432 [2024-12-13 09:27:14.302341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:20.432 [2024-12-13 09:27:14.316593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed920 00:25:20.432 [2024-12-13 09:27:14.319010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.432 [2024-12-13 09:27:14.319066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:20.691 [2024-12-13 09:27:14.333863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:25:20.691 [2024-12-13 09:27:14.336308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.691 [2024-12-13 09:27:14.336399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:20.691 [2024-12-13 09:27:14.350525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eea00 00:25:20.691 [2024-12-13 09:27:14.352595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.691 [2024-12-13 09:27:14.352657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:20.691 [2024-12-13 09:27:14.366632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef270 00:25:20.691 [2024-12-13 09:27:14.369004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.691 [2024-12-13 09:27:14.369044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:20.691 [2024-12-13 09:27:14.383122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173efae0 00:25:20.691 [2024-12-13 09:27:14.385256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.691 [2024-12-13 09:27:14.385324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:20.691 [2024-12-13 09:27:14.399422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0350 00:25:20.691 [2024-12-13 09:27:14.401402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.691 [2024-12-13 09:27:14.401468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:20.691 [2024-12-13 09:27:14.415581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0bc0 00:25:20.691 [2024-12-13 09:27:14.417530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.691 [2024-12-13 09:27:14.417591] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:20.691 [2024-12-13 09:27:14.432151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1430 00:25:20.691 [2024-12-13 09:27:14.434199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.691 [2024-12-13 09:27:14.434246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:20.691 14991.50 IOPS, 58.56 MiB/s [2024-12-13T09:27:14.581Z] [2024-12-13 09:27:14.449273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e84c0 00:25:20.691 [2024-12-13 09:27:14.449505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.691 [2024-12-13 09:27:14.449538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:20.691 00:25:20.691 Latency(us) 00:25:20.691 [2024-12-13T09:27:14.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.691 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:20.691 nvme0n1 : 2.01 15005.48 58.62 0.00 0.00 8512.34 3142.75 33125.47 00:25:20.691 [2024-12-13T09:27:14.581Z] =================================================================================================================== 00:25:20.691 [2024-12-13T09:27:14.581Z] Total : 15005.48 58.62 0.00 0.00 8512.34 3142.75 33125.47 00:25:20.691 { 00:25:20.691 "results": [ 00:25:20.691 { 00:25:20.691 "job": "nvme0n1", 00:25:20.691 "core_mask": "0x2", 00:25:20.691 "workload": "randwrite", 00:25:20.691 "status": "finished", 00:25:20.691 "queue_depth": 128, 00:25:20.691 "io_size": 4096, 00:25:20.691 "runtime": 2.006667, 00:25:20.691 "iops": 15005.479234970227, 00:25:20.691 "mibps": 58.61515326160245, 00:25:20.691 "io_failed": 0, 00:25:20.691 "io_timeout": 0, 00:25:20.691 "avg_latency_us": 8512.338088708144, 00:25:20.691 "min_latency_us": 3142.7490909090907, 00:25:20.691 "max_latency_us": 33125.46909090909 00:25:20.691 } 00:25:20.691 ], 00:25:20.691 "core_count": 1 00:25:20.691 } 00:25:20.691 09:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:20.691 09:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:20.691 09:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:20.691 09:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:20.691 | .driver_specific 00:25:20.691 | .nvme_error 00:25:20.691 | .status_code 00:25:20.691 | .command_transient_transport_error' 00:25:20.950 09:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 118 > 0 )) 00:25:20.950 09:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 88413 00:25:20.950 09:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 88413 ']' 00:25:20.950 09:27:14 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 88413 00:25:20.950 09:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:20.950 09:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:20.950 09:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88413 00:25:20.950 killing process with pid 88413 00:25:20.950 Received shutdown signal, test time was about 2.000000 seconds 00:25:20.950 00:25:20.950 Latency(us) 00:25:20.950 [2024-12-13T09:27:14.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.950 [2024-12-13T09:27:14.841Z] =================================================================================================================== 00:25:20.951 [2024-12-13T09:27:14.841Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:20.951 09:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:20.951 09:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:20.951 09:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88413' 00:25:20.951 09:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 88413 00:25:20.951 09:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 88413 00:25:21.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:21.886 09:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:21.886 09:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:21.886 09:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:21.886 09:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:21.886 09:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:21.886 09:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=88480 00:25:21.886 09:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 88480 /var/tmp/bperf.sock 00:25:21.886 09:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:21.886 09:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 88480 ']' 00:25:21.886 09:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:21.886 09:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:21.886 09:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
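Note: the digest_error pass/fail check logged above pulls the transient transport error count out of bdev_get_iostat and requires it to be non-zero. A minimal sketch of that check follows, reusing only the rpc.py path, the /var/tmp/bperf.sock socket, and the jq filter that appear verbatim in this log; the rpc and errcount names are local to the sketch, which assumes bdev_nvme_set_options --nvme-error-stat has been applied as shown in the trace.

  # Query per-bdev NVMe error counters from the running bdevperf instance.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The test only passes when the forced digest failures surfaced as transient transport errors.
  (( errcount > 0 ))

The run that starts below repeats the same flow with a 128 KiB (131072-byte) randwrite workload at queue depth 16.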
00:25:21.886 09:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:21.886 09:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:21.886 [2024-12-13 09:27:15.674624] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:25:21.886 [2024-12-13 09:27:15.675089] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-aI/O size of 131072 is greater than zero copy threshold (65536). 00:25:21.886 Zero copy mechanism will not be used. 00:25:21.886 llocations --file-prefix=spdk_pid88480 ] 00:25:22.145 [2024-12-13 09:27:15.851628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.145 [2024-12-13 09:27:15.932261] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.404 [2024-12-13 09:27:16.083570] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:22.663 09:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:22.663 09:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:22.922 09:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:22.922 09:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:23.181 09:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:23.181 09:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.181 09:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:23.181 09:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.181 09:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:23.181 09:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:23.440 nvme0n1 00:25:23.440 09:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:23.440 09:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.440 09:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:23.440 09:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.440 09:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:23.440 09:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:23.440 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:23.440 Zero copy mechanism will not be used. 00:25:23.440 Running I/O for 2 seconds... 00:25:23.440 [2024-12-13 09:27:17.233202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.440 [2024-12-13 09:27:17.233346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.440 [2024-12-13 09:27:17.233383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.440 [2024-12-13 09:27:17.239018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.440 [2024-12-13 09:27:17.239279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.440 [2024-12-13 09:27:17.239326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.440 [2024-12-13 09:27:17.244899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.440 [2024-12-13 09:27:17.244994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.440 [2024-12-13 09:27:17.245028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.440 [2024-12-13 09:27:17.250378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.440 [2024-12-13 09:27:17.250493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.440 [2024-12-13 09:27:17.250530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.440 [2024-12-13 09:27:17.255868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.440 [2024-12-13 09:27:17.255977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.440 [2024-12-13 09:27:17.256005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.440 [2024-12-13 09:27:17.261371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.440 [2024-12-13 09:27:17.261481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.440 [2024-12-13 09:27:17.261509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.440 [2024-12-13 09:27:17.266802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.440 [2024-12-13 
09:27:17.266936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.440 [2024-12-13 09:27:17.266974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.440 [2024-12-13 09:27:17.272321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.440 [2024-12-13 09:27:17.272417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.440 [2024-12-13 09:27:17.272453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.440 [2024-12-13 09:27:17.277686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.440 [2024-12-13 09:27:17.277803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.441 [2024-12-13 09:27:17.277831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.441 [2024-12-13 09:27:17.283159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.441 [2024-12-13 09:27:17.283290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.441 [2024-12-13 09:27:17.283318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.441 [2024-12-13 09:27:17.288713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.441 [2024-12-13 09:27:17.288804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.441 [2024-12-13 09:27:17.288839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.441 [2024-12-13 09:27:17.294001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.441 [2024-12-13 09:27:17.294102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.441 [2024-12-13 09:27:17.294131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.441 [2024-12-13 09:27:17.299654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.441 [2024-12-13 09:27:17.299919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.441 [2024-12-13 09:27:17.299950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.441 [2024-12-13 09:27:17.305438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.441 [2024-12-13 09:27:17.305539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.441 [2024-12-13 09:27:17.305574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.441 [2024-12-13 09:27:17.310924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.441 [2024-12-13 09:27:17.311164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.441 [2024-12-13 09:27:17.311219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.441 [2024-12-13 09:27:17.316674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.441 [2024-12-13 09:27:17.316774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.441 [2024-12-13 09:27:17.316802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.441 [2024-12-13 09:27:17.322011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.441 [2024-12-13 09:27:17.322132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.441 [2024-12-13 09:27:17.322160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.701 [2024-12-13 09:27:17.328280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.701 [2024-12-13 09:27:17.328415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.701 [2024-12-13 09:27:17.328454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.701 [2024-12-13 09:27:17.334356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.701 [2024-12-13 09:27:17.334473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.701 [2024-12-13 09:27:17.334502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.701 [2024-12-13 09:27:17.339936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.701 [2024-12-13 09:27:17.340058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.701 [2024-12-13 09:27:17.340086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.701 [2024-12-13 09:27:17.345497] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.701 [2024-12-13 09:27:17.345605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.701 [2024-12-13 09:27:17.345642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.701 [2024-12-13 09:27:17.351061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.701 [2024-12-13 09:27:17.351391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.701 [2024-12-13 09:27:17.351445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.701 [2024-12-13 09:27:17.356784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.701 [2024-12-13 09:27:17.356885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.701 [2024-12-13 09:27:17.356913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.701 [2024-12-13 09:27:17.362258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.701 [2024-12-13 09:27:17.362412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.701 [2024-12-13 09:27:17.362441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.701 [2024-12-13 09:27:17.367884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.701 [2024-12-13 09:27:17.367993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.701 [2024-12-13 09:27:17.368028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.701 [2024-12-13 09:27:17.373457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.701 [2024-12-13 09:27:17.373576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.701 [2024-12-13 09:27:17.373611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.701 [2024-12-13 09:27:17.378996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.701 [2024-12-13 09:27:17.379270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.701 [2024-12-13 09:27:17.379299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:25:23.701 [2024-12-13 09:27:17.384652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.701 [2024-12-13 09:27:17.384747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.701 [2024-12-13 09:27:17.384782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.701 [2024-12-13 09:27:17.389954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.701 [2024-12-13 09:27:17.390072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.701 [2024-12-13 09:27:17.390109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.701 [2024-12-13 09:27:17.395485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.701 [2024-12-13 09:27:17.395604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.701 [2024-12-13 09:27:17.395632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.701 [2024-12-13 09:27:17.400870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.701 [2024-12-13 09:27:17.400982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.701 [2024-12-13 09:27:17.401010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.701 [2024-12-13 09:27:17.406266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.701 [2024-12-13 09:27:17.406390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.701 [2024-12-13 09:27:17.406426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.411813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.411919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 09:27:17.411955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.417275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.417408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 09:27:17.417436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.422701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.422990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 09:27:17.423021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.428750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.428859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 09:27:17.428894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.434315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.434421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 09:27:17.434450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.439875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.439975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 09:27:17.440003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.445380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.445485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 09:27:17.445523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.450813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.450939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 09:27:17.450980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.456265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.456393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 
09:27:17.456421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.461629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.461747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 09:27:17.461774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.467131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.467261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 09:27:17.467295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.472637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.472728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 09:27:17.472763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.478139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.478240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 09:27:17.478267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.483894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.484150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 09:27:17.484187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.489583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.489689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 09:27:17.489725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.495121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.495283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 09:27:17.495310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.500832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.500944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 09:27:17.500972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.506243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.506366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 09:27:17.506402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.511782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.512009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 09:27:17.512045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.517526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.517634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 09:27:17.517662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.522948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.523273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 09:27:17.523302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.528675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.528766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 09:27:17.528801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.534213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.534353] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 09:27:17.534390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.539903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.540004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 09:27:17.540032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.545369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.545489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 09:27:17.545524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.550759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.551026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 09:27:17.551064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.556510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.702 [2024-12-13 09:27:17.556627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.702 [2024-12-13 09:27:17.556655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.702 [2024-12-13 09:27:17.561974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.703 [2024-12-13 09:27:17.562207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.703 [2024-12-13 09:27:17.562236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.703 [2024-12-13 09:27:17.567694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.703 [2024-12-13 09:27:17.567800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.703 [2024-12-13 09:27:17.567835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.703 [2024-12-13 09:27:17.573016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:25:23.703 [2024-12-13 09:27:17.573110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.703 [2024-12-13 09:27:17.573144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.703 [2024-12-13 09:27:17.578473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.703 [2024-12-13 09:27:17.578590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.703 [2024-12-13 09:27:17.578618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.703 [2024-12-13 09:27:17.584164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.703 [2024-12-13 09:27:17.584284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.703 [2024-12-13 09:27:17.584330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.963 [2024-12-13 09:27:17.590381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.963 [2024-12-13 09:27:17.590489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.963 [2024-12-13 09:27:17.590528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.963 [2024-12-13 09:27:17.596330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.963 [2024-12-13 09:27:17.596450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.963 [2024-12-13 09:27:17.596478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.963 [2024-12-13 09:27:17.601750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.963 [2024-12-13 09:27:17.601988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.963 [2024-12-13 09:27:17.602017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.963 [2024-12-13 09:27:17.607631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.963 [2024-12-13 09:27:17.607724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.963 [2024-12-13 09:27:17.607759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.613027] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.613255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 09:27:17.613291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.618686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.618784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 09:27:17.618812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.624154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.624282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 09:27:17.624339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.629662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.629755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 09:27:17.629790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.635192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.635369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 09:27:17.635427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.640618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.640852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 09:27:17.640881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.646222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.646360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 09:27:17.646395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.651721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.651947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 09:27:17.651984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.657326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.657438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 09:27:17.657466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.662750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.663014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 09:27:17.663045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.668441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.668552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 09:27:17.668586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.673761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.673853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 09:27:17.673888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.679368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.679466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 09:27:17.679493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.684736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.684835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 09:27:17.684863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.690038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.690144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 09:27:17.690178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.695817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.695947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 09:27:17.695975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.701286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.701413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 09:27:17.701440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.706817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.707114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 09:27:17.707151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.712663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.712761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 09:27:17.712811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.718566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.718723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 09:27:17.718751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.724473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.724596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 
09:27:17.724625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.730377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.730510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 09:27:17.730550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.736493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.736634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 09:27:17.736664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.742495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.742642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 09:27:17.742686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.748745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.748842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 09:27:17.748882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.754576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.754710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 09:27:17.754747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.760457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.964 [2024-12-13 09:27:17.760560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.964 [2024-12-13 09:27:17.760589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.964 [2024-12-13 09:27:17.766144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.965 [2024-12-13 09:27:17.766413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.965 [2024-12-13 09:27:17.766452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.965 [2024-12-13 09:27:17.772158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.965 [2024-12-13 09:27:17.772267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.965 [2024-12-13 09:27:17.772315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.965 [2024-12-13 09:27:17.777851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.965 [2024-12-13 09:27:17.778096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.965 [2024-12-13 09:27:17.778126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.965 [2024-12-13 09:27:17.783903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.965 [2024-12-13 09:27:17.784006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.965 [2024-12-13 09:27:17.784035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.965 [2024-12-13 09:27:17.789471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.965 [2024-12-13 09:27:17.789586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.965 [2024-12-13 09:27:17.789622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.965 [2024-12-13 09:27:17.795096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.965 [2024-12-13 09:27:17.795231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.965 [2024-12-13 09:27:17.795281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.965 [2024-12-13 09:27:17.800828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.965 [2024-12-13 09:27:17.801084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.965 [2024-12-13 09:27:17.801113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.965 [2024-12-13 09:27:17.806686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.965 [2024-12-13 09:27:17.806840] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.965 [2024-12-13 09:27:17.806905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.965 [2024-12-13 09:27:17.812423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.965 [2024-12-13 09:27:17.812542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.965 [2024-12-13 09:27:17.812578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.965 [2024-12-13 09:27:17.817876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.965 [2024-12-13 09:27:17.817993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.965 [2024-12-13 09:27:17.818022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.965 [2024-12-13 09:27:17.823497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.965 [2024-12-13 09:27:17.823597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.965 [2024-12-13 09:27:17.823625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.965 [2024-12-13 09:27:17.828997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.965 [2024-12-13 09:27:17.829101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.965 [2024-12-13 09:27:17.829136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.965 [2024-12-13 09:27:17.834569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.965 [2024-12-13 09:27:17.834715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.965 [2024-12-13 09:27:17.834752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.965 [2024-12-13 09:27:17.840312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:23.965 [2024-12-13 09:27:17.840431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.965 [2024-12-13 09:27:17.840460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.965 [2024-12-13 09:27:17.845885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:25:23.965 [2024-12-13 09:27:17.845993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.965 [2024-12-13 09:27:17.846037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.226 [2024-12-13 09:27:17.852164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.226 [2024-12-13 09:27:17.852275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.226 [2024-12-13 09:27:17.852327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.226 [2024-12-13 09:27:17.858201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.226 [2024-12-13 09:27:17.858332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.226 [2024-12-13 09:27:17.858375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.226 [2024-12-13 09:27:17.864019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.226 [2024-12-13 09:27:17.864145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.226 [2024-12-13 09:27:17.864187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.226 [2024-12-13 09:27:17.869720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.226 [2024-12-13 09:27:17.869837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.226 [2024-12-13 09:27:17.869873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.226 [2024-12-13 09:27:17.875554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.226 [2024-12-13 09:27:17.875651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.226 [2024-12-13 09:27:17.875690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.226 [2024-12-13 09:27:17.881108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.226 [2024-12-13 09:27:17.881209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.226 [2024-12-13 09:27:17.881238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.226 [2024-12-13 09:27:17.886723] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.226 [2024-12-13 09:27:17.886999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.226 [2024-12-13 09:27:17.887038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.226 [2024-12-13 09:27:17.892859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.226 [2024-12-13 09:27:17.893113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.226 [2024-12-13 09:27:17.893378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.226 [2024-12-13 09:27:17.898738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.226 [2024-12-13 09:27:17.899046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.226 [2024-12-13 09:27:17.899248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.226 [2024-12-13 09:27:17.904723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.226 [2024-12-13 09:27:17.904986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.226 [2024-12-13 09:27:17.905156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.226 [2024-12-13 09:27:17.910569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.226 [2024-12-13 09:27:17.910826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.226 [2024-12-13 09:27:17.911046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.226 [2024-12-13 09:27:17.916466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.226 [2024-12-13 09:27:17.916739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.226 [2024-12-13 09:27:17.916936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.226 [2024-12-13 09:27:17.922253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.226 [2024-12-13 09:27:17.922562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.226 [2024-12-13 09:27:17.922744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:25:24.226 [2024-12-13 09:27:17.928312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.226 [2024-12-13 09:27:17.928564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.226 [2024-12-13 09:27:17.928734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.226 [2024-12-13 09:27:17.934040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.226 [2024-12-13 09:27:17.934310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.226 [2024-12-13 09:27:17.934363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.226 [2024-12-13 09:27:17.939934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.226 [2024-12-13 09:27:17.940060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.226 [2024-12-13 09:27:17.940089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.226 [2024-12-13 09:27:17.945617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.226 [2024-12-13 09:27:17.945775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.226 [2024-12-13 09:27:17.945804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.226 [2024-12-13 09:27:17.951368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.226 [2024-12-13 09:27:17.951484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.226 [2024-12-13 09:27:17.951520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.226 [2024-12-13 09:27:17.957055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.226 [2024-12-13 09:27:17.957160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.226 [2024-12-13 09:27:17.957189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.226 [2024-12-13 09:27:17.962647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.226 [2024-12-13 09:27:17.962763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.226 [2024-12-13 09:27:17.962791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.226 [2024-12-13 09:27:17.968298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.226 [2024-12-13 09:27:17.968412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.226 [2024-12-13 09:27:17.968448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.226 [2024-12-13 09:27:17.973754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.226 [2024-12-13 09:27:17.974002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.226 [2024-12-13 09:27:17.974040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.226 [2024-12-13 09:27:17.979737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.226 [2024-12-13 09:27:17.979840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.226 [2024-12-13 09:27:17.979868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.226 [2024-12-13 09:27:17.985441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.226 [2024-12-13 09:27:17.985547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.226 [2024-12-13 09:27:17.985576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.227 [2024-12-13 09:27:17.991017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.227 [2024-12-13 09:27:17.991131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.227 [2024-12-13 09:27:17.991168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.227 [2024-12-13 09:27:17.997220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.227 [2024-12-13 09:27:17.997497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.227 [2024-12-13 09:27:17.997527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.227 [2024-12-13 09:27:18.003885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.227 [2024-12-13 09:27:18.003971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.227 [2024-12-13 
09:27:18.004000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.227 [2024-12-13 09:27:18.009893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.227 [2024-12-13 09:27:18.010201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.227 [2024-12-13 09:27:18.010503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.227 [2024-12-13 09:27:18.016578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.227 [2024-12-13 09:27:18.016889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.227 [2024-12-13 09:27:18.017082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.227 [2024-12-13 09:27:18.023200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.227 [2024-12-13 09:27:18.023488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.227 [2024-12-13 09:27:18.023522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.227 [2024-12-13 09:27:18.029441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.227 [2024-12-13 09:27:18.029565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.227 [2024-12-13 09:27:18.029618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.227 [2024-12-13 09:27:18.035190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.227 [2024-12-13 09:27:18.035356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.227 [2024-12-13 09:27:18.035412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.227 [2024-12-13 09:27:18.041004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.227 [2024-12-13 09:27:18.041236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.227 [2024-12-13 09:27:18.041266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.227 [2024-12-13 09:27:18.047061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.227 [2024-12-13 09:27:18.047196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.227 [2024-12-13 09:27:18.047232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.227 [2024-12-13 09:27:18.052991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.227 [2024-12-13 09:27:18.053236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.227 [2024-12-13 09:27:18.053273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.227 [2024-12-13 09:27:18.059015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.227 [2024-12-13 09:27:18.059162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.227 [2024-12-13 09:27:18.059214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.227 [2024-12-13 09:27:18.064872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.227 [2024-12-13 09:27:18.065147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.227 [2024-12-13 09:27:18.065177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.227 [2024-12-13 09:27:18.070903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.227 [2024-12-13 09:27:18.071017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.227 [2024-12-13 09:27:18.071057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.227 [2024-12-13 09:27:18.076671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.227 [2024-12-13 09:27:18.076794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.227 [2024-12-13 09:27:18.076823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.227 [2024-12-13 09:27:18.082327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.227 [2024-12-13 09:27:18.082440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.227 [2024-12-13 09:27:18.082469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.227 [2024-12-13 09:27:18.087920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.227 [2024-12-13 09:27:18.088171] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.227 [2024-12-13 09:27:18.088209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.227 [2024-12-13 09:27:18.093916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.227 [2024-12-13 09:27:18.094013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.227 [2024-12-13 09:27:18.094051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.227 [2024-12-13 09:27:18.099889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.227 [2024-12-13 09:27:18.100019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.227 [2024-12-13 09:27:18.100049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.227 [2024-12-13 09:27:18.106035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.227 [2024-12-13 09:27:18.106139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.227 [2024-12-13 09:27:18.106168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.227 [2024-12-13 09:27:18.112886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.227 [2024-12-13 09:27:18.113040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.227 [2024-12-13 09:27:18.113080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.488 [2024-12-13 09:27:18.119803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.488 [2024-12-13 09:27:18.119953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.488 [2024-12-13 09:27:18.119983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.488 [2024-12-13 09:27:18.125869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.488 [2024-12-13 09:27:18.125985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.488 [2024-12-13 09:27:18.126014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.488 [2024-12-13 09:27:18.132057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:25:24.488 [2024-12-13 09:27:18.132169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.488 [2024-12-13 09:27:18.132209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.488 [2024-12-13 09:27:18.138113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.488 [2024-12-13 09:27:18.138214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.488 [2024-12-13 09:27:18.138242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.488 [2024-12-13 09:27:18.144000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.488 [2024-12-13 09:27:18.144100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.488 [2024-12-13 09:27:18.144129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.488 [2024-12-13 09:27:18.149890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.488 [2024-12-13 09:27:18.149998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.488 [2024-12-13 09:27:18.150033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.488 [2024-12-13 09:27:18.155769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.488 [2024-12-13 09:27:18.155861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.488 [2024-12-13 09:27:18.155897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.488 [2024-12-13 09:27:18.161336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.488 [2024-12-13 09:27:18.161455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.488 [2024-12-13 09:27:18.161483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.488 [2024-12-13 09:27:18.166656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.488 [2024-12-13 09:27:18.166992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.488 [2024-12-13 09:27:18.167031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.488 [2024-12-13 09:27:18.172448] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.488 [2024-12-13 09:27:18.172541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.488 [2024-12-13 09:27:18.172575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.488 [2024-12-13 09:27:18.177785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.488 [2024-12-13 09:27:18.178059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.488 [2024-12-13 09:27:18.178089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.488 [2024-12-13 09:27:18.183795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.488 [2024-12-13 09:27:18.183910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.488 [2024-12-13 09:27:18.183937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.488 [2024-12-13 09:27:18.189234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.488 [2024-12-13 09:27:18.189532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.488 [2024-12-13 09:27:18.189570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.488 [2024-12-13 09:27:18.195051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.488 [2024-12-13 09:27:18.195146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.488 [2024-12-13 09:27:18.195196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.488 [2024-12-13 09:27:18.200505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.488 [2024-12-13 09:27:18.200604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.488 [2024-12-13 09:27:18.200633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.488 [2024-12-13 09:27:18.205821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.488 [2024-12-13 09:27:18.205923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.488 [2024-12-13 09:27:18.205951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:25:24.488 [2024-12-13 09:27:18.211214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.488 [2024-12-13 09:27:18.211335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.488 [2024-12-13 09:27:18.211379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.488 [2024-12-13 09:27:18.216595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.488 [2024-12-13 09:27:18.216702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.488 [2024-12-13 09:27:18.216730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.488 [2024-12-13 09:27:18.221931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.488 [2024-12-13 09:27:18.222032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.488 [2024-12-13 09:27:18.222074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.488 [2024-12-13 09:27:18.227446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.488 [2024-12-13 09:27:18.227554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.488 [2024-12-13 09:27:18.227582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.488 5420.00 IOPS, 677.50 MiB/s [2024-12-13T09:27:18.378Z] [2024-12-13 09:27:18.233581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.488 [2024-12-13 09:27:18.233751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.488 [2024-12-13 09:27:18.233780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.488 [2024-12-13 09:27:18.239436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.488 [2024-12-13 09:27:18.239526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.488 [2024-12-13 09:27:18.239554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.488 [2024-12-13 09:27:18.244949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.488 [2024-12-13 09:27:18.245208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.488 [2024-12-13 09:27:18.245237] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.488 [2024-12-13 09:27:18.250935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.488 [2024-12-13 09:27:18.251056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.488 [2024-12-13 09:27:18.251086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.489 [2024-12-13 09:27:18.256426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.489 [2024-12-13 09:27:18.256518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.489 [2024-12-13 09:27:18.256545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.489 [2024-12-13 09:27:18.261874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.489 [2024-12-13 09:27:18.261995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.489 [2024-12-13 09:27:18.262023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.489 [2024-12-13 09:27:18.267349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.489 [2024-12-13 09:27:18.267462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.489 [2024-12-13 09:27:18.267490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.489 [2024-12-13 09:27:18.272691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.489 [2024-12-13 09:27:18.272798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.489 [2024-12-13 09:27:18.272825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.489 [2024-12-13 09:27:18.278110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.489 [2024-12-13 09:27:18.278207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.489 [2024-12-13 09:27:18.278234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.489 [2024-12-13 09:27:18.283711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.489 [2024-12-13 09:27:18.283823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:24.489 [2024-12-13 09:27:18.283850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.489 [2024-12-13 09:27:18.289046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.489 [2024-12-13 09:27:18.289138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.489 [2024-12-13 09:27:18.289165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.489 [2024-12-13 09:27:18.294529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.489 [2024-12-13 09:27:18.294621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.489 [2024-12-13 09:27:18.294649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.489 [2024-12-13 09:27:18.300012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.489 [2024-12-13 09:27:18.300105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.489 [2024-12-13 09:27:18.300133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.489 [2024-12-13 09:27:18.305465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.489 [2024-12-13 09:27:18.305567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.489 [2024-12-13 09:27:18.305595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.489 [2024-12-13 09:27:18.310789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.489 [2024-12-13 09:27:18.311102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.489 [2024-12-13 09:27:18.311133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.489 [2024-12-13 09:27:18.316551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.489 [2024-12-13 09:27:18.316659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.489 [2024-12-13 09:27:18.316686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.489 [2024-12-13 09:27:18.321965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.489 [2024-12-13 09:27:18.322209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.489 [2024-12-13 09:27:18.322237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.489 [2024-12-13 09:27:18.327833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.489 [2024-12-13 09:27:18.327952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.489 [2024-12-13 09:27:18.327979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.489 [2024-12-13 09:27:18.333160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.489 [2024-12-13 09:27:18.333253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.489 [2024-12-13 09:27:18.333280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.489 [2024-12-13 09:27:18.338585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.489 [2024-12-13 09:27:18.338692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.489 [2024-12-13 09:27:18.338720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.489 [2024-12-13 09:27:18.344013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.489 [2024-12-13 09:27:18.344106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.489 [2024-12-13 09:27:18.344133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.489 [2024-12-13 09:27:18.349430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.489 [2024-12-13 09:27:18.349526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.489 [2024-12-13 09:27:18.349554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.489 [2024-12-13 09:27:18.354837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.489 [2024-12-13 09:27:18.354995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.489 [2024-12-13 09:27:18.355025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.489 [2024-12-13 09:27:18.360369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.489 [2024-12-13 
09:27:18.360462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.489 [2024-12-13 09:27:18.360489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.489 [2024-12-13 09:27:18.365727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.489 [2024-12-13 09:27:18.365987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.489 [2024-12-13 09:27:18.366017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.489 [2024-12-13 09:27:18.371859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.489 [2024-12-13 09:27:18.371981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.489 [2024-12-13 09:27:18.372009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.750 [2024-12-13 09:27:18.377887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.750 [2024-12-13 09:27:18.377982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.750 [2024-12-13 09:27:18.378010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.750 [2024-12-13 09:27:18.384033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.750 [2024-12-13 09:27:18.384154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.750 [2024-12-13 09:27:18.384182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.750 [2024-12-13 09:27:18.389426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.750 [2024-12-13 09:27:18.389526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.750 [2024-12-13 09:27:18.389553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.750 [2024-12-13 09:27:18.394854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.750 [2024-12-13 09:27:18.395049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.750 [2024-12-13 09:27:18.395080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.750 [2024-12-13 09:27:18.400509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.750 [2024-12-13 09:27:18.400610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.750 [2024-12-13 09:27:18.400637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.750 [2024-12-13 09:27:18.405842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.750 [2024-12-13 09:27:18.405947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.750 [2024-12-13 09:27:18.405974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.750 [2024-12-13 09:27:18.411429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.750 [2024-12-13 09:27:18.411534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.750 [2024-12-13 09:27:18.411562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.750 [2024-12-13 09:27:18.416696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.750 [2024-12-13 09:27:18.416940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.750 [2024-12-13 09:27:18.416969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.750 [2024-12-13 09:27:18.422241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.750 [2024-12-13 09:27:18.422363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.750 [2024-12-13 09:27:18.422391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.750 [2024-12-13 09:27:18.427774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.750 [2024-12-13 09:27:18.428039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.750 [2024-12-13 09:27:18.428069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.750 [2024-12-13 09:27:18.433449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.750 [2024-12-13 09:27:18.433556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.750 [2024-12-13 09:27:18.433584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.750 [2024-12-13 09:27:18.438782] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.750 [2024-12-13 09:27:18.439076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.750 [2024-12-13 09:27:18.439107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.750 [2024-12-13 09:27:18.444556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.750 [2024-12-13 09:27:18.444648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.750 [2024-12-13 09:27:18.444676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.750 [2024-12-13 09:27:18.449908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.750 [2024-12-13 09:27:18.450022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.750 [2024-12-13 09:27:18.450050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.750 [2024-12-13 09:27:18.455407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.750 [2024-12-13 09:27:18.455535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.750 [2024-12-13 09:27:18.455563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.750 [2024-12-13 09:27:18.460843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.750 [2024-12-13 09:27:18.460948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.750 [2024-12-13 09:27:18.460976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.750 [2024-12-13 09:27:18.466252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.750 [2024-12-13 09:27:18.466375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.750 [2024-12-13 09:27:18.466404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.750 [2024-12-13 09:27:18.471689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.471786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 09:27:18.471814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.477081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.477174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 09:27:18.477214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.482763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.483038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 09:27:18.483070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.488578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.488669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 09:27:18.488698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.494086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.494196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 09:27:18.494225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.499696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.499806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 09:27:18.499834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.505125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.505217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 09:27:18.505244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.510574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.510668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 09:27:18.510696] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.516185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.516298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 09:27:18.516357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.521641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.521734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 09:27:18.521761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.527004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.527335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 09:27:18.527366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.532815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.533078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 09:27:18.533241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.538392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.538679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 09:27:18.538877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.544374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.544651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 09:27:18.544946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.550048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.550337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 
09:27:18.550572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.555835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.556106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 09:27:18.556296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.561624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.561899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 09:27:18.562148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.567254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.567553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 09:27:18.567744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.573066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.573347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 09:27:18.573591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.578712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.579023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 09:27:18.579336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.584562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.584656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 09:27:18.584685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.589993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.590249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7392 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 09:27:18.590278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.595877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.595984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 09:27:18.596011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.601424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.601532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 09:27:18.601561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.606749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.606888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 09:27:18.606917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.612269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.612393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 09:27:18.612422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.617686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.751 [2024-12-13 09:27:18.617793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.751 [2024-12-13 09:27:18.617820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.751 [2024-12-13 09:27:18.623168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.752 [2024-12-13 09:27:18.623275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.752 [2024-12-13 09:27:18.623318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.752 [2024-12-13 09:27:18.628616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.752 [2024-12-13 09:27:18.628893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.752 [2024-12-13 09:27:18.628923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.752 [2024-12-13 09:27:18.634583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:24.752 [2024-12-13 09:27:18.634694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.752 [2024-12-13 09:27:18.634722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.012 [2024-12-13 09:27:18.640659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.012 [2024-12-13 09:27:18.640766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.012 [2024-12-13 09:27:18.640794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.012 [2024-12-13 09:27:18.646444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.012 [2024-12-13 09:27:18.646541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.012 [2024-12-13 09:27:18.646569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.012 [2024-12-13 09:27:18.651877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.012 [2024-12-13 09:27:18.652138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.012 [2024-12-13 09:27:18.652168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.012 [2024-12-13 09:27:18.657586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.012 [2024-12-13 09:27:18.657705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.012 [2024-12-13 09:27:18.657733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.012 [2024-12-13 09:27:18.663057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.012 [2024-12-13 09:27:18.663174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.012 [2024-12-13 09:27:18.663231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.012 [2024-12-13 09:27:18.668542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:25:25.012 [2024-12-13 09:27:18.668634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.012 [2024-12-13 09:27:18.668663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.012 [2024-12-13 09:27:18.673984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.012 [2024-12-13 09:27:18.674100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.012 [2024-12-13 09:27:18.674127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.012 [2024-12-13 09:27:18.679526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.012 [2024-12-13 09:27:18.679640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.012 [2024-12-13 09:27:18.679668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.012 [2024-12-13 09:27:18.684935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.012 [2024-12-13 09:27:18.685042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.012 [2024-12-13 09:27:18.685069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.012 [2024-12-13 09:27:18.690369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.012 [2024-12-13 09:27:18.690464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.012 [2024-12-13 09:27:18.690492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.012 [2024-12-13 09:27:18.695863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.012 [2024-12-13 09:27:18.695955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.012 [2024-12-13 09:27:18.695982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.012 [2024-12-13 09:27:18.701383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.012 [2024-12-13 09:27:18.701497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.012 [2024-12-13 09:27:18.701526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.012 [2024-12-13 09:27:18.706902] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.012 [2024-12-13 09:27:18.707015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.012 [2024-12-13 09:27:18.707044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.012 [2024-12-13 09:27:18.712383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.012 [2024-12-13 09:27:18.712473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.012 [2024-12-13 09:27:18.712501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.012 [2024-12-13 09:27:18.717812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.012 [2024-12-13 09:27:18.717916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.012 [2024-12-13 09:27:18.717943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.012 [2024-12-13 09:27:18.723289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.012 [2024-12-13 09:27:18.723431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.012 [2024-12-13 09:27:18.723476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.012 [2024-12-13 09:27:18.728696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.012 [2024-12-13 09:27:18.728802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.012 [2024-12-13 09:27:18.728829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.012 [2024-12-13 09:27:18.734127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.012 [2024-12-13 09:27:18.734218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.012 [2024-12-13 09:27:18.734246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.012 [2024-12-13 09:27:18.739644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.012 [2024-12-13 09:27:18.739887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.012 [2024-12-13 09:27:18.739916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:25:25.012 [2024-12-13 09:27:18.745328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.012 [2024-12-13 09:27:18.745421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.012 [2024-12-13 09:27:18.745450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.012 [2024-12-13 09:27:18.750687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.012 [2024-12-13 09:27:18.750999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.012 [2024-12-13 09:27:18.751031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.012 [2024-12-13 09:27:18.756512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.012 [2024-12-13 09:27:18.756603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.012 [2024-12-13 09:27:18.756631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.012 [2024-12-13 09:27:18.761920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.012 [2024-12-13 09:27:18.762163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.013 [2024-12-13 09:27:18.762192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.013 [2024-12-13 09:27:18.767792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.013 [2024-12-13 09:27:18.767905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.013 [2024-12-13 09:27:18.767932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.013 [2024-12-13 09:27:18.773131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.013 [2024-12-13 09:27:18.773403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.013 [2024-12-13 09:27:18.773435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.013 [2024-12-13 09:27:18.778988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.013 [2024-12-13 09:27:18.779310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.013 [2024-12-13 09:27:18.779537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.013 [2024-12-13 09:27:18.784765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.013 [2024-12-13 09:27:18.785042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.013 [2024-12-13 09:27:18.785258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.013 [2024-12-13 09:27:18.790403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.013 [2024-12-13 09:27:18.790668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.013 [2024-12-13 09:27:18.790871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.013 [2024-12-13 09:27:18.796250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.013 [2024-12-13 09:27:18.796539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.013 [2024-12-13 09:27:18.796743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.013 [2024-12-13 09:27:18.801930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.013 [2024-12-13 09:27:18.802195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.013 [2024-12-13 09:27:18.802379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.013 [2024-12-13 09:27:18.807788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.013 [2024-12-13 09:27:18.808072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.013 [2024-12-13 09:27:18.808249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.013 [2024-12-13 09:27:18.813744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.013 [2024-12-13 09:27:18.814017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.013 [2024-12-13 09:27:18.814189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.013 [2024-12-13 09:27:18.819598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.013 [2024-12-13 09:27:18.819885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.013 [2024-12-13 
09:27:18.820060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.013 [2024-12-13 09:27:18.825392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.013 [2024-12-13 09:27:18.825512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.013 [2024-12-13 09:27:18.825543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.013 [2024-12-13 09:27:18.830787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.013 [2024-12-13 09:27:18.831114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.013 [2024-12-13 09:27:18.831147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.013 [2024-12-13 09:27:18.836634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.013 [2024-12-13 09:27:18.836727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.013 [2024-12-13 09:27:18.836755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.013 [2024-12-13 09:27:18.842028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.013 [2024-12-13 09:27:18.842136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.013 [2024-12-13 09:27:18.842164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.013 [2024-12-13 09:27:18.847639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.013 [2024-12-13 09:27:18.847732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.013 [2024-12-13 09:27:18.847760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.013 [2024-12-13 09:27:18.852990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.013 [2024-12-13 09:27:18.853104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.013 [2024-12-13 09:27:18.853132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.013 [2024-12-13 09:27:18.858372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.013 [2024-12-13 09:27:18.858467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.013 [2024-12-13 09:27:18.858495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.013 [2024-12-13 09:27:18.863835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.013 [2024-12-13 09:27:18.863937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.013 [2024-12-13 09:27:18.863965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.013 [2024-12-13 09:27:18.869216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.013 [2024-12-13 09:27:18.869339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.013 [2024-12-13 09:27:18.869368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.013 [2024-12-13 09:27:18.874645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.013 [2024-12-13 09:27:18.874751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.013 [2024-12-13 09:27:18.874779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.013 [2024-12-13 09:27:18.880135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.013 [2024-12-13 09:27:18.880246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.013 [2024-12-13 09:27:18.880274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.013 [2024-12-13 09:27:18.885763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.013 [2024-12-13 09:27:18.886023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.013 [2024-12-13 09:27:18.886052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.013 [2024-12-13 09:27:18.891582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.013 [2024-12-13 09:27:18.891673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.013 [2024-12-13 09:27:18.891701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.013 [2024-12-13 09:27:18.897339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.013 [2024-12-13 09:27:18.897451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.013 [2024-12-13 09:27:18.897481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.273 [2024-12-13 09:27:18.903417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.273 [2024-12-13 09:27:18.903508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.273 [2024-12-13 09:27:18.903536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.273 [2024-12-13 09:27:18.909115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.273 [2024-12-13 09:27:18.909373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.273 [2024-12-13 09:27:18.909404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.273 [2024-12-13 09:27:18.914939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.273 [2024-12-13 09:27:18.915266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.273 [2024-12-13 09:27:18.915446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.273 [2024-12-13 09:27:18.920753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.273 [2024-12-13 09:27:18.921015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.273 [2024-12-13 09:27:18.921229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.273 [2024-12-13 09:27:18.926574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.273 [2024-12-13 09:27:18.926870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.273 [2024-12-13 09:27:18.927129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.273 [2024-12-13 09:27:18.932425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.273 [2024-12-13 09:27:18.932687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:18.932844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:18.938192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:18.938483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:18.938656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:18.944107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:18.944371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:18.944615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:18.949867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:18.950121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:18.950402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:18.955770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:18.956036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:18.956193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:18.961524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:18.961626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:18.961656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:18.966930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:18.967221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:18.967268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:18.972649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:18.972741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:18.972770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:18.978030] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:18.978149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:18.978177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:18.983625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:18.983716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:18.983745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:18.988959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:18.989064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:18.989092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:18.994460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:18.994551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:18.994579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:18.999890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:18.999996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:19.000024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:19.005217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:19.005340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:19.005369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:19.010596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:19.010704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:19.010731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:19.016085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:19.016179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:19.016207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:19.021525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:19.021648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:19.021676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:19.026887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:19.027144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:19.027189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:19.032735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:19.032841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:19.032869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:19.038173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:19.038475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:19.038506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:19.043872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:19.043969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:19.043995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:19.049231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:19.049354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:19.049382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:19.055048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:19.055166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:19.055221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:19.061029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:19.061143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:19.061172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:19.066808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:19.066945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:19.066976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:19.073008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:19.073126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:19.073169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:19.079049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:19.079218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:19.079261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.274 [2024-12-13 09:27:19.085098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.274 [2024-12-13 09:27:19.085212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.274 [2024-12-13 09:27:19.085240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.275 [2024-12-13 09:27:19.091080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.275 [2024-12-13 09:27:19.091241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.275 [2024-12-13 
09:27:19.091270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.275 [2024-12-13 09:27:19.096912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.275 [2024-12-13 09:27:19.097029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.275 [2024-12-13 09:27:19.097057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.275 [2024-12-13 09:27:19.102770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.275 [2024-12-13 09:27:19.102910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.275 [2024-12-13 09:27:19.102939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.275 [2024-12-13 09:27:19.108432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.275 [2024-12-13 09:27:19.108553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.275 [2024-12-13 09:27:19.108582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.275 [2024-12-13 09:27:19.114116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.275 [2024-12-13 09:27:19.114369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.275 [2024-12-13 09:27:19.114399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.275 [2024-12-13 09:27:19.120030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.275 [2024-12-13 09:27:19.120125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.275 [2024-12-13 09:27:19.120153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.275 [2024-12-13 09:27:19.125974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.275 [2024-12-13 09:27:19.126209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.275 [2024-12-13 09:27:19.126239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.275 [2024-12-13 09:27:19.132397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.275 [2024-12-13 09:27:19.132498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2208 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.275 [2024-12-13 09:27:19.132528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.275 [2024-12-13 09:27:19.138744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.275 [2024-12-13 09:27:19.138907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.275 [2024-12-13 09:27:19.138941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.275 [2024-12-13 09:27:19.145170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.275 [2024-12-13 09:27:19.145274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.275 [2024-12-13 09:27:19.145338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.275 [2024-12-13 09:27:19.151588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.275 [2024-12-13 09:27:19.151742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.275 [2024-12-13 09:27:19.151771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.275 [2024-12-13 09:27:19.157824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.275 [2024-12-13 09:27:19.158061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.275 [2024-12-13 09:27:19.158092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.534 [2024-12-13 09:27:19.164809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.534 [2024-12-13 09:27:19.165062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.534 [2024-12-13 09:27:19.165091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.534 [2024-12-13 09:27:19.171347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.534 [2024-12-13 09:27:19.171490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.534 [2024-12-13 09:27:19.171520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.534 [2024-12-13 09:27:19.176994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.534 [2024-12-13 09:27:19.177087] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.534 [2024-12-13 09:27:19.177115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.534 [2024-12-13 09:27:19.182755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.534 [2024-12-13 09:27:19.183031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.534 [2024-12-13 09:27:19.183375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.534 [2024-12-13 09:27:19.188734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.534 [2024-12-13 09:27:19.188828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.534 [2024-12-13 09:27:19.188857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.534 [2024-12-13 09:27:19.194377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.535 [2024-12-13 09:27:19.194493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.535 [2024-12-13 09:27:19.194521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.535 [2024-12-13 09:27:19.199976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.535 [2024-12-13 09:27:19.200070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.535 [2024-12-13 09:27:19.200098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.535 [2024-12-13 09:27:19.205716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.535 [2024-12-13 09:27:19.205952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.535 [2024-12-13 09:27:19.205982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.535 [2024-12-13 09:27:19.211765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.535 [2024-12-13 09:27:19.212029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.535 [2024-12-13 09:27:19.212220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.535 [2024-12-13 09:27:19.217617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:25:25.535 [2024-12-13 09:27:19.217895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.535 [2024-12-13 09:27:19.218065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.535 [2024-12-13 09:27:19.223498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.535 [2024-12-13 09:27:19.223751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.535 [2024-12-13 09:27:19.223906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.535 [2024-12-13 09:27:19.229247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:25:25.535 [2024-12-13 09:27:19.229564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.535 [2024-12-13 09:27:19.229784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.535 5447.50 IOPS, 680.94 MiB/s 00:25:25.535 Latency(us) 00:25:25.535 [2024-12-13T09:27:19.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.535 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:25.535 nvme0n1 : 2.00 5444.06 680.51 0.00 0.00 2930.95 1832.03 7685.59 00:25:25.535 [2024-12-13T09:27:19.425Z] =================================================================================================================== 00:25:25.535 [2024-12-13T09:27:19.425Z] Total : 5444.06 680.51 0.00 0.00 2930.95 1832.03 7685.59 00:25:25.535 { 00:25:25.535 "results": [ 00:25:25.535 { 00:25:25.535 "job": "nvme0n1", 00:25:25.535 "core_mask": "0x2", 00:25:25.535 "workload": "randwrite", 00:25:25.535 "status": "finished", 00:25:25.535 "queue_depth": 16, 00:25:25.535 "io_size": 131072, 00:25:25.535 "runtime": 2.004385, 00:25:25.535 "iops": 5444.063889921347, 00:25:25.535 "mibps": 680.5079862401684, 00:25:25.535 "io_failed": 0, 00:25:25.535 "io_timeout": 0, 00:25:25.535 "avg_latency_us": 2930.953281791522, 00:25:25.535 "min_latency_us": 1832.0290909090909, 00:25:25.535 "max_latency_us": 7685.585454545455 00:25:25.535 } 00:25:25.535 ], 00:25:25.535 "core_count": 1 00:25:25.535 } 00:25:25.535 09:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:25.535 09:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:25.535 09:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:25.535 | .driver_specific 00:25:25.535 | .nvme_error 00:25:25.535 | .status_code 00:25:25.535 | .command_transient_transport_error' 00:25:25.535 09:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:25.794 09:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 352 > 0 )) 00:25:25.794 09:27:19 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 88480 00:25:25.794 09:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 88480 ']' 00:25:25.794 09:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 88480 00:25:25.794 09:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:25.794 09:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:25.794 09:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88480 00:25:25.794 killing process with pid 88480 00:25:25.794 Received shutdown signal, test time was about 2.000000 seconds 00:25:25.794 00:25:25.794 Latency(us) 00:25:25.794 [2024-12-13T09:27:19.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.794 [2024-12-13T09:27:19.684Z] =================================================================================================================== 00:25:25.794 [2024-12-13T09:27:19.684Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:25.794 09:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:25.794 09:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:25.794 09:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88480' 00:25:25.794 09:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 88480 00:25:25.794 09:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 88480 00:25:26.731 09:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 88253 00:25:26.731 09:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 88253 ']' 00:25:26.731 09:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 88253 00:25:26.731 09:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:26.731 09:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:26.731 09:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88253 00:25:26.731 killing process with pid 88253 00:25:26.731 09:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:26.731 09:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:26.731 09:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88253' 00:25:26.731 09:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 88253 00:25:26.731 09:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 88253 00:25:27.669 ************************************ 00:25:27.669 END TEST nvmf_digest_error 00:25:27.669 ************************************ 00:25:27.669 00:25:27.669 real 0m21.491s 00:25:27.669 user 0m41.189s 
00:25:27.669 sys 0m4.420s 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:27.670 rmmod nvme_tcp 00:25:27.670 rmmod nvme_fabrics 00:25:27.670 rmmod nvme_keyring 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 88253 ']' 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 88253 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 88253 ']' 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 88253 00:25:27.670 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (88253) - No such process 00:25:27.670 Process with pid 88253 is not found 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 88253 is not found' 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip 
link set nvmf_init_br down 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:27.670 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.929 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:25:27.929 00:25:27.929 real 0m44.969s 00:25:27.929 user 1m24.910s 00:25:27.929 sys 0m9.170s 00:25:27.929 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:27.929 09:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:27.929 ************************************ 00:25:27.929 END TEST nvmf_digest 00:25:27.929 ************************************ 00:25:27.929 09:27:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:25:27.929 09:27:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:25:27.929 09:27:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:25:27.929 09:27:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:27.929 09:27:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:27.929 09:27:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.929 ************************************ 00:25:27.929 START TEST nvmf_host_multipath 00:25:27.929 ************************************ 00:25:27.929 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:25:27.929 * Looking for test storage... 
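For reference, the pass/fail decision for the digest-error run traced above comes down to a single RPC query: host/digest.sh asks the bperf application for the bdev's I/O statistics and reads the transient-transport-error counter out of the returned JSON, which must be non-zero after the corrupted writes (here it was 352). The stand-alone sketch below mirrors the rpc.py invocation and jq filter shown in the trace; the paths, socket name, and bdev name are copied from that trace, and the wrapper script itself is illustrative rather than part of the harness.

#!/usr/bin/env bash
# Sketch: query bperf for nvme0n1 iostat and extract the transient transport
# error count, mirroring get_transient_errcount in host/digest.sh.
set -euo pipefail

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # SPDK JSON-RPC client
bperf_sock=/var/tmp/bperf.sock                       # bdevperf RPC socket
bdev=nvme0n1

errcount=$("$rpc_py" -s "$bperf_sock" bdev_get_iostat -b "$bdev" \
  | jq -r '.bdevs[0]
           | .driver_specific
           | .nvme_error
           | .status_code
           | .command_transient_transport_error')

# Every write whose data digest was corrupted should have completed with a
# TRANSIENT TRANSPORT ERROR (00/22), so a passing run reports a non-zero count.
if (( errcount > 0 )); then
    echo "observed $errcount transient transport errors"
else
    echo "no transient transport errors recorded" >&2
    exit 1
fi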
00:25:27.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:27.929 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:27.929 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:25:27.929 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:27.929 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:27.929 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:27.929 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:27.929 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:27.929 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:25:27.929 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:25:27.929 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:25:27.929 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:25:27.929 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:25:27.929 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:25:27.929 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:25:27.929 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:27.929 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:25:27.930 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:25:27.930 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:27.930 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:27.930 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:28.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.190 --rc genhtml_branch_coverage=1 00:25:28.190 --rc genhtml_function_coverage=1 00:25:28.190 --rc genhtml_legend=1 00:25:28.190 --rc geninfo_all_blocks=1 00:25:28.190 --rc geninfo_unexecuted_blocks=1 00:25:28.190 00:25:28.190 ' 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:28.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.190 --rc genhtml_branch_coverage=1 00:25:28.190 --rc genhtml_function_coverage=1 00:25:28.190 --rc genhtml_legend=1 00:25:28.190 --rc geninfo_all_blocks=1 00:25:28.190 --rc geninfo_unexecuted_blocks=1 00:25:28.190 00:25:28.190 ' 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:28.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.190 --rc genhtml_branch_coverage=1 00:25:28.190 --rc genhtml_function_coverage=1 00:25:28.190 --rc genhtml_legend=1 00:25:28.190 --rc geninfo_all_blocks=1 00:25:28.190 --rc geninfo_unexecuted_blocks=1 00:25:28.190 00:25:28.190 ' 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:28.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.190 --rc genhtml_branch_coverage=1 00:25:28.190 --rc genhtml_function_coverage=1 00:25:28.190 --rc genhtml_legend=1 00:25:28.190 --rc geninfo_all_blocks=1 00:25:28.190 --rc geninfo_unexecuted_blocks=1 00:25:28.190 00:25:28.190 ' 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:28.190 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:25:28.190 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:28.191 Cannot find device "nvmf_init_br" 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:28.191 Cannot find device "nvmf_init_br2" 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:28.191 Cannot find device "nvmf_tgt_br" 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:28.191 Cannot find device "nvmf_tgt_br2" 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:28.191 Cannot find device "nvmf_init_br" 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:28.191 Cannot find device "nvmf_init_br2" 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:28.191 Cannot find device "nvmf_tgt_br" 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:28.191 Cannot find device "nvmf_tgt_br2" 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:28.191 Cannot find device "nvmf_br" 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:28.191 Cannot find device "nvmf_init_if" 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:28.191 Cannot find device "nvmf_init_if2" 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:25:28.191 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:28.191 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:28.191 09:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:28.191 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:28.191 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:28.191 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:28.191 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:28.191 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:28.191 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:28.191 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:28.191 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:28.191 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
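At this point nvmf_veth_init has built the test topology: a network namespace for the target, veth pairs for the initiator and target sides, the target legs moved into the namespace, 10.0.0.x/24 addresses assigned on both sides, all links brought up, and an nvmf_br bridge created; the remaining bridge-master assignments, iptables ACCEPT rules for port 4420, and ping checks continue just below. A condensed sketch of the same topology for one initiator/target pair, using the names and addresses from the log:

    ip netns add nvmf_tgt_ns_spdk                               # target side runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator leg + bridge-facing peer
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target leg + bridge-facing peer
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # only the target leg crosses into the namespace

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target listen address

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge ties the two *_br peers together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
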
00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:28.451 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:28.451 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:25:28.451 00:25:28.451 --- 10.0.0.3 ping statistics --- 00:25:28.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.451 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:28.451 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:28.451 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:25:28.451 00:25:28.451 --- 10.0.0.4 ping statistics --- 00:25:28.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.451 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:28.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:28.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:25:28.451 00:25:28.451 --- 10.0.0.1 ping statistics --- 00:25:28.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.451 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:28.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:28.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:25:28.451 00:25:28.451 --- 10.0.0.2 ping statistics --- 00:25:28.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.451 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=88816 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 88816 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 88816 ']' 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:28.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:28.451 09:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:28.710 [2024-12-13 09:27:22.377842] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
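nvmfappstart above launches the SPDK target inside the namespace and blocks until its JSON-RPC socket is ready before the multipath test issues any RPCs. A rough equivalent of that launch-and-wait step, with a simple polling loop standing in for the repo's waitforlisten helper (the loop is an assumption, not the helper's actual implementation):

    modprobe nvme-tcp                                    # host-side NVMe/TCP driver for the initiator
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &   # two cores, full tracepoint mask
    nvmfpid=$!

    # assumption: poll for the RPC socket instead of calling waitforlisten
    until [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done
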
00:25:28.710 [2024-12-13 09:27:22.378013] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.710 [2024-12-13 09:27:22.565726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:28.969 [2024-12-13 09:27:22.691014] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:28.969 [2024-12-13 09:27:22.691085] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:28.969 [2024-12-13 09:27:22.691109] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:28.969 [2024-12-13 09:27:22.691139] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:28.969 [2024-12-13 09:27:22.691157] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:28.969 [2024-12-13 09:27:22.693296] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.969 [2024-12-13 09:27:22.693317] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.969 [2024-12-13 09:27:22.850520] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:29.537 09:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:29.537 09:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:25:29.537 09:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:29.537 09:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:29.537 09:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:29.537 09:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.537 09:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=88816 00:25:29.537 09:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:30.105 [2024-12-13 09:27:23.695979] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.105 09:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:30.105 Malloc0 00:25:30.363 09:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:30.363 09:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:30.621 09:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:30.880 [2024-12-13 09:27:24.654188] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:30.880 09:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:25:31.138 [2024-12-13 09:27:24.870250] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:25:31.138 09:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=88866 00:25:31.138 09:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:31.138 09:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:31.138 09:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 88866 /var/tmp/bdevperf.sock 00:25:31.138 09:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 88866 ']' 00:25:31.138 09:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:31.138 09:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:31.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:31.138 09:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:31.138 09:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:31.138 09:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:32.086 09:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:32.086 09:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:25:32.086 09:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:32.345 09:27:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:32.932 Nvme0n1 00:25:32.932 09:27:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:32.932 Nvme0n1 00:25:33.191 09:27:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:25:33.191 09:27:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:34.127 09:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:25:34.127 09:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:34.386 09:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:34.645 09:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:25:34.645 09:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88916 00:25:34.645 09:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88816 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:34.645 09:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:41.213 09:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:41.213 09:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:41.213 09:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:41.213 09:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:41.213 Attaching 4 probes... 00:25:41.213 @path[10.0.0.3, 4421]: 16076 00:25:41.213 @path[10.0.0.3, 4421]: 16503 00:25:41.213 @path[10.0.0.3, 4421]: 16720 00:25:41.213 @path[10.0.0.3, 4421]: 16346 00:25:41.213 @path[10.0.0.3, 4421]: 16451 00:25:41.213 09:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:41.213 09:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:41.213 09:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:41.213 09:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:41.213 09:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:41.213 09:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:41.213 09:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88916 00:25:41.213 09:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:41.213 09:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:25:41.213 09:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:41.213 09:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:25:41.472 09:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:25:41.472 09:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89026 00:25:41.472 09:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88816 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:41.472 09:27:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:48.038 09:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:48.038 09:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:25:48.038 09:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:25:48.038 09:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:48.038 Attaching 4 probes... 00:25:48.038 @path[10.0.0.3, 4420]: 15825 00:25:48.038 @path[10.0.0.3, 4420]: 16076 00:25:48.038 @path[10.0.0.3, 4420]: 16196 00:25:48.038 @path[10.0.0.3, 4420]: 16277 00:25:48.038 @path[10.0.0.3, 4420]: 16289 00:25:48.038 09:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:48.038 09:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:48.038 09:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:48.038 09:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:25:48.038 09:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:25:48.038 09:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:25:48.038 09:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89026 00:25:48.038 09:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:48.038 09:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:25:48.038 09:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:48.038 09:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:48.297 09:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:25:48.297 09:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88816 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:48.297 09:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89139 00:25:48.297 09:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:54.863 09:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:54.863 09:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:54.863 09:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:54.863 09:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:54.863 Attaching 4 probes... 00:25:54.863 @path[10.0.0.3, 4421]: 11850 00:25:54.863 @path[10.0.0.3, 4421]: 16023 00:25:54.863 @path[10.0.0.3, 4421]: 15941 00:25:54.863 @path[10.0.0.3, 4421]: 15952 00:25:54.863 @path[10.0.0.3, 4421]: 15907 00:25:54.863 09:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:54.863 09:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:54.863 09:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:54.863 09:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:54.863 09:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:54.863 09:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:54.863 09:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89139 00:25:54.863 09:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:54.863 09:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:25:54.863 09:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:54.863 09:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:25:55.122 09:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:25:55.122 09:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89257 00:25:55.122 09:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88816 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:55.122 09:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:01.689 09:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:01.689 09:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:26:01.689 09:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:26:01.689 09:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:01.689 Attaching 4 probes... 
00:26:01.689 00:26:01.689 00:26:01.689 00:26:01.689 00:26:01.689 00:26:01.689 09:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:01.689 09:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:26:01.689 09:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:01.689 09:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:26:01.689 09:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:26:01.689 09:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:26:01.689 09:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89257 00:26:01.689 09:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:01.689 09:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:26:01.689 09:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:26:01.689 09:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:26:01.948 09:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:26:01.948 09:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89364 00:26:01.948 09:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88816 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:01.948 09:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:08.515 09:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:08.515 09:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:08.515 09:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:26:08.515 09:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:08.515 Attaching 4 probes... 
00:26:08.515 @path[10.0.0.3, 4421]: 15573 00:26:08.515 @path[10.0.0.3, 4421]: 15700 00:26:08.515 @path[10.0.0.3, 4421]: 15801 00:26:08.515 @path[10.0.0.3, 4421]: 15688 00:26:08.515 @path[10.0.0.3, 4421]: 15911 00:26:08.515 09:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:08.515 09:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:26:08.515 09:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:08.515 09:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:26:08.515 09:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:08.515 09:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:08.515 09:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89364 00:26:08.515 09:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:08.515 09:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:26:08.515 09:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:26:09.451 09:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:26:09.451 09:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89488 00:26:09.451 09:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88816 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:09.451 09:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:16.018 09:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:16.018 09:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:26:16.018 09:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:26:16.018 09:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:16.018 Attaching 4 probes... 
00:26:16.018 @path[10.0.0.3, 4420]: 15470 00:26:16.018 @path[10.0.0.3, 4420]: 15694 00:26:16.018 @path[10.0.0.3, 4420]: 15629 00:26:16.018 @path[10.0.0.3, 4420]: 15640 00:26:16.018 @path[10.0.0.3, 4420]: 15791 00:26:16.018 09:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:16.018 09:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:26:16.019 09:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:16.019 09:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:26:16.019 09:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:26:16.019 09:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:26:16.019 09:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89488 00:26:16.019 09:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:16.019 09:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:26:16.019 [2024-12-13 09:28:09.694267] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:26:16.019 09:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:26:16.279 09:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:26:22.880 09:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:26:22.880 09:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89657 00:26:22.880 09:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88816 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:22.880 09:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:28.153 09:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:28.154 09:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:28.413 09:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:26:28.413 09:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:28.413 Attaching 4 probes... 
00:26:28.413 @path[10.0.0.3, 4421]: 15780 00:26:28.413 @path[10.0.0.3, 4421]: 15951 00:26:28.413 @path[10.0.0.3, 4421]: 15836 00:26:28.413 @path[10.0.0.3, 4421]: 15786 00:26:28.413 @path[10.0.0.3, 4421]: 15800 00:26:28.413 09:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:26:28.413 09:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:28.413 09:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:28.413 09:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:26:28.413 09:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:28.413 09:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:28.413 09:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89657 00:26:28.413 09:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:28.413 09:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 88866 00:26:28.413 09:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 88866 ']' 00:26:28.413 09:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 88866 00:26:28.413 09:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:26:28.413 09:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:28.413 09:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88866 00:26:28.413 killing process with pid 88866 00:26:28.413 09:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:28.413 09:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:28.413 09:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88866' 00:26:28.413 09:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 88866 00:26:28.413 09:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 88866 00:26:28.672 { 00:26:28.672 "results": [ 00:26:28.672 { 00:26:28.672 "job": "Nvme0n1", 00:26:28.672 "core_mask": "0x4", 00:26:28.672 "workload": "verify", 00:26:28.672 "status": "terminated", 00:26:28.672 "verify_range": { 00:26:28.672 "start": 0, 00:26:28.672 "length": 16384 00:26:28.672 }, 00:26:28.672 "queue_depth": 128, 00:26:28.672 "io_size": 4096, 00:26:28.672 "runtime": 55.382479, 00:26:28.672 "iops": 6792.418952571625, 00:26:28.672 "mibps": 26.53288653348291, 00:26:28.672 "io_failed": 0, 00:26:28.672 "io_timeout": 0, 00:26:28.672 "avg_latency_us": 18819.92220940065, 00:26:28.672 "min_latency_us": 588.3345454545455, 00:26:28.672 "max_latency_us": 7046430.72 00:26:28.672 } 00:26:28.672 ], 00:26:28.672 "core_count": 1 00:26:28.672 } 00:26:29.618 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 88866 00:26:29.618 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:29.618 [2024-12-13 09:27:24.968638] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 
24.03.0 initialization... 00:26:29.618 [2024-12-13 09:27:24.968823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88866 ] 00:26:29.618 [2024-12-13 09:27:25.130742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.618 [2024-12-13 09:27:25.239310] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:29.618 [2024-12-13 09:27:25.406460] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:29.618 Running I/O for 90 seconds... 00:26:29.618 8014.00 IOPS, 31.30 MiB/s [2024-12-13T09:28:23.508Z] 8156.50 IOPS, 31.86 MiB/s [2024-12-13T09:28:23.508Z] 8165.67 IOPS, 31.90 MiB/s [2024-12-13T09:28:23.508Z] 8190.25 IOPS, 31.99 MiB/s [2024-12-13T09:28:23.508Z] 8224.20 IOPS, 32.13 MiB/s [2024-12-13T09:28:23.508Z] 8213.50 IOPS, 32.08 MiB/s [2024-12-13T09:28:23.508Z] 8217.29 IOPS, 32.10 MiB/s [2024-12-13T09:28:23.508Z] 8194.12 IOPS, 32.01 MiB/s [2024-12-13T09:28:23.508Z] [2024-12-13 09:27:35.239934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.618 [2024-12-13 09:27:35.240042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:29.618 [2024-12-13 09:27:35.240122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.618 [2024-12-13 09:27:35.240150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.618 [2024-12-13 09:27:35.240180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.618 [2024-12-13 09:27:35.240199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:29.618 [2024-12-13 09:27:35.240226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.618 [2024-12-13 09:27:35.240245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:29.618 [2024-12-13 09:27:35.240271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.618 [2024-12-13 09:27:35.240302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:29.618 [2024-12-13 09:27:35.240335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.618 [2024-12-13 09:27:35.240354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.240380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.619 [2024-12-13 09:27:35.240399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.240425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.619 [2024-12-13 09:27:35.240444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.240470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.619 [2024-12-13 09:27:35.240489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.240514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.619 [2024-12-13 09:27:35.240555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.240585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.619 [2024-12-13 09:27:35.240604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.240630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.619 [2024-12-13 09:27:35.240649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.240675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.619 [2024-12-13 09:27:35.240694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.240719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.619 [2024-12-13 09:27:35.240738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.240763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.619 [2024-12-13 09:27:35.240782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.240808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.619 [2024-12-13 09:27:35.240826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.240852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
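For reference, the port check traced by multipath.sh earlier in this log (the awk, cut and sed steps at host/multipath.sh@69) boils down to one small pipeline. This is only a sketch of the idea, with the trace.txt path taken from the rm -f step in the same trace standing in for whatever file the script actually parses:

# Sketch only, not the script itself: pull the reconnect port out of lines
# shaped like "@path[10.0.0.3, 4421]: 15780".
trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
port=$(awk '$1=="@path[10.0.0.3," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)
[[ $port == 4421 ]] && echo "I/O resumed on port $port"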
00:26:29.619 [2024-12-13 09:27:35.240871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.240898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.619 [2024-12-13 09:27:35.240918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.240944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.619 [2024-12-13 09:27:35.240963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.240988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.619 [2024-12-13 09:27:35.241007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.241033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.619 [2024-12-13 09:27:35.241052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.241077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.619 [2024-12-13 09:27:35.241103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.241132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.619 [2024-12-13 09:27:35.241151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.241177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.619 [2024-12-13 09:27:35.241195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.241221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.619 [2024-12-13 09:27:35.241240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.241265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.619 [2024-12-13 09:27:35.241313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.241342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:20056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.619 [2024-12-13 09:27:35.241361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.241388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.619 [2024-12-13 09:27:35.241408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.241434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.619 [2024-12-13 09:27:35.241453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.241479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.619 [2024-12-13 09:27:35.241498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.241525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.619 [2024-12-13 09:27:35.241544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.241571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.619 [2024-12-13 09:27:35.241591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.241625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.619 [2024-12-13 09:27:35.241646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.241674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.619 [2024-12-13 09:27:35.241708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.241743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.619 [2024-12-13 09:27:35.241764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.241790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.619 [2024-12-13 09:27:35.241827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.241853] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.619 [2024-12-13 09:27:35.241872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.241898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.619 [2024-12-13 09:27:35.241918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.241944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.619 [2024-12-13 09:27:35.241963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.241989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.619 [2024-12-13 09:27:35.242008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.242034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.619 [2024-12-13 09:27:35.242053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.242080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.619 [2024-12-13 09:27:35.242099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.242145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.619 [2024-12-13 09:27:35.242166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:29.619 [2024-12-13 09:27:35.242192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.620 [2024-12-13 09:27:35.242212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.242238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.620 [2024-12-13 09:27:35.242258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.242284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.620 [2024-12-13 09:27:35.242304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 
m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.242359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.620 [2024-12-13 09:27:35.242381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.242408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.620 [2024-12-13 09:27:35.242439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.242470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.620 [2024-12-13 09:27:35.242491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.242518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.620 [2024-12-13 09:27:35.242538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.242566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.620 [2024-12-13 09:27:35.242586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.242612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.620 [2024-12-13 09:27:35.242631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.242657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.620 [2024-12-13 09:27:35.242676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.242702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.620 [2024-12-13 09:27:35.242721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.242747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.620 [2024-12-13 09:27:35.242767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.242793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.620 [2024-12-13 09:27:35.242812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.242837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.620 [2024-12-13 09:27:35.242901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.242932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.620 [2024-12-13 09:27:35.242953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.242982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.620 [2024-12-13 09:27:35.243012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.243043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.620 [2024-12-13 09:27:35.243064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.243092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.620 [2024-12-13 09:27:35.243120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.243148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.620 [2024-12-13 09:27:35.243168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.243211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.620 [2024-12-13 09:27:35.243245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.243272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.620 [2024-12-13 09:27:35.243292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.243318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.620 [2024-12-13 09:27:35.243349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.243380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.620 [2024-12-13 09:27:35.243400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.243427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.620 [2024-12-13 09:27:35.243446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.243472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.620 [2024-12-13 09:27:35.243492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.243518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.620 [2024-12-13 09:27:35.243537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.243563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.620 [2024-12-13 09:27:35.243583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.243609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.620 [2024-12-13 09:27:35.243635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.243664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.620 [2024-12-13 09:27:35.243684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.243717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.620 [2024-12-13 09:27:35.243738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.243765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.620 [2024-12-13 09:27:35.243784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.243811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.620 [2024-12-13 09:27:35.243830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.243856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20640 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:29.620 [2024-12-13 09:27:35.243876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.243902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.620 [2024-12-13 09:27:35.243921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.243947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.620 [2024-12-13 09:27:35.243967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.243993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.620 [2024-12-13 09:27:35.244013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:29.620 [2024-12-13 09:27:35.244039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.621 [2024-12-13 09:27:35.244059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.244085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.621 [2024-12-13 09:27:35.244106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.244132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.621 [2024-12-13 09:27:35.244152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.244178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.621 [2024-12-13 09:27:35.244197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.244233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.621 [2024-12-13 09:27:35.244253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.244292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.621 [2024-12-13 09:27:35.244315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.244342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 
nsid:1 lba:20720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.621 [2024-12-13 09:27:35.244361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.244388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.621 [2024-12-13 09:27:35.244407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.244433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.621 [2024-12-13 09:27:35.244452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.244478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.621 [2024-12-13 09:27:35.244499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.244525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.621 [2024-12-13 09:27:35.244544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.244570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.621 [2024-12-13 09:27:35.244589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.244615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.621 [2024-12-13 09:27:35.244634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.244660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.621 [2024-12-13 09:27:35.244679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.244705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.621 [2024-12-13 09:27:35.244724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.244750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.621 [2024-12-13 09:27:35.244769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.244804] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.621 [2024-12-13 09:27:35.244829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.244856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.621 [2024-12-13 09:27:35.244876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.244902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.621 [2024-12-13 09:27:35.244922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.244948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.621 [2024-12-13 09:27:35.244967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.244994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.621 [2024-12-13 09:27:35.245014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.245040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.621 [2024-12-13 09:27:35.245059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.245084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.621 [2024-12-13 09:27:35.245104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.245130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.621 [2024-12-13 09:27:35.245149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.245175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.621 [2024-12-13 09:27:35.245194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.245220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.621 [2024-12-13 09:27:35.245240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
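Each pair of NOTICE lines above is one in-flight I/O: nvme_io_qpair_print_command shows the queued READ or WRITE, and spdk_nvme_print_completion shows its status, here "ASYMMETRIC ACCESS INACCESSIBLE (03/02)", the path-related ANA status returned while the active path is being switched. A rough way to tally these from the same try.txt that is cat'd above; the path and grep patterns are assumptions taken from this log, not commands from the test suite:

log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
# completions that came back with the path-related ANA status
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' "$log"
# completions per queue; the leading space skips the "sqid:" field in the command lines
grep -o ' qid:[0-9]*' "$log" | sort | uniq -c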
00:26:29.621 [2024-12-13 09:27:35.245266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.621 [2024-12-13 09:27:35.245313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.245359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.621 [2024-12-13 09:27:35.245380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.245408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.621 [2024-12-13 09:27:35.245437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.245465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.621 [2024-12-13 09:27:35.245486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.245512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.621 [2024-12-13 09:27:35.245532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.245558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.621 [2024-12-13 09:27:35.245578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.245605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.621 [2024-12-13 09:27:35.245628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.245655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.621 [2024-12-13 09:27:35.245676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.245717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.621 [2024-12-13 09:27:35.245737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.245763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.621 [2024-12-13 09:27:35.245787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.245814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.621 [2024-12-13 09:27:35.245833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.245859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.621 [2024-12-13 09:27:35.245878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.245904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.621 [2024-12-13 09:27:35.245924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:29.621 [2024-12-13 09:27:35.245950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.621 [2024-12-13 09:27:35.245969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:35.247653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.622 [2024-12-13 09:27:35.247705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:35.247757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.622 [2024-12-13 09:27:35.247786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:35.247815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.622 [2024-12-13 09:27:35.247835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:35.247861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.622 [2024-12-13 09:27:35.247880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:35.247907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.622 [2024-12-13 09:27:35.247926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:35.247952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.622 [2024-12-13 09:27:35.247971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:35.247997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.622 [2024-12-13 09:27:35.248017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:35.248043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.622 [2024-12-13 09:27:35.248063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:35.248108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.622 [2024-12-13 09:27:35.248137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:29.622 8130.89 IOPS, 31.76 MiB/s [2024-12-13T09:28:23.512Z] 8131.40 IOPS, 31.76 MiB/s [2024-12-13T09:28:23.512Z] 8127.45 IOPS, 31.75 MiB/s [2024-12-13T09:28:23.512Z] 8126.17 IOPS, 31.74 MiB/s [2024-12-13T09:28:23.512Z] 8122.62 IOPS, 31.73 MiB/s [2024-12-13T09:28:23.512Z] 8124.71 IOPS, 31.74 MiB/s [2024-12-13T09:28:23.512Z] [2024-12-13 09:27:41.779240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:46736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.622 [2024-12-13 09:27:41.779329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.779407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.622 [2024-12-13 09:27:41.779434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.779462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:46752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.622 [2024-12-13 09:27:41.779482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.779530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:46760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.622 [2024-12-13 09:27:41.779552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.779577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.622 [2024-12-13 09:27:41.779595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.779621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.622 [2024-12-13 09:27:41.779639] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.779665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:46784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.622 [2024-12-13 09:27:41.779683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.779708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.622 [2024-12-13 09:27:41.779726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.779752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:46096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.622 [2024-12-13 09:27:41.779770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.779795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:46104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.622 [2024-12-13 09:27:41.779813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.779838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:46112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.622 [2024-12-13 09:27:41.779856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.779881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.622 [2024-12-13 09:27:41.779915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.779941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:46128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.622 [2024-12-13 09:27:41.779959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.779985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:46136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.622 [2024-12-13 09:27:41.780004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.780029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:46144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.622 [2024-12-13 09:27:41.780048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.780074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:46152 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:29.622 [2024-12-13 09:27:41.780106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.780135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:46160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.622 [2024-12-13 09:27:41.780155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.780184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.622 [2024-12-13 09:27:41.780204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.780245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:46176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.622 [2024-12-13 09:27:41.780264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.780289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:46184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.622 [2024-12-13 09:27:41.780321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.780352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:46192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.622 [2024-12-13 09:27:41.780371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.780397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.622 [2024-12-13 09:27:41.780415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.780440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:46208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.622 [2024-12-13 09:27:41.780459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.780485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:46216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.622 [2024-12-13 09:27:41.780504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.780551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:46800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.622 [2024-12-13 09:27:41.780576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.780603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:31 nsid:1 lba:46808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.622 [2024-12-13 09:27:41.780622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.780648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:46816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.622 [2024-12-13 09:27:41.780666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.780691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.622 [2024-12-13 09:27:41.780719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.780747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:46832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.622 [2024-12-13 09:27:41.780766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:29.622 [2024-12-13 09:27:41.780792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:46840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.623 [2024-12-13 09:27:41.780810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.780835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:46848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.623 [2024-12-13 09:27:41.780854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.780879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:46856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.623 [2024-12-13 09:27:41.780897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.780923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:46224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.623 [2024-12-13 09:27:41.780941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.780967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:46232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.623 [2024-12-13 09:27:41.780985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.781011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.623 [2024-12-13 09:27:41.781029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.781053] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:46248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.623 [2024-12-13 09:27:41.781072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.781097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:46256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.623 [2024-12-13 09:27:41.781115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.781140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:46264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.623 [2024-12-13 09:27:41.781158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.781183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:46272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.623 [2024-12-13 09:27:41.781201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.781225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:46280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.623 [2024-12-13 09:27:41.781244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.781306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.623 [2024-12-13 09:27:41.781330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.781357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:46296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.623 [2024-12-13 09:27:41.781377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.781424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:46304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.623 [2024-12-13 09:27:41.781444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.781470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.623 [2024-12-13 09:27:41.781489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.781515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:46320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.623 [2024-12-13 09:27:41.781534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 
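The per-interval samples ("8130.89 IOPS, 31.76 MiB/s" and so on) and the final "mibps" value in the results block follow directly from the 4096-byte io_size of this run; each command above is len:8 blocks, which at 4096 bytes per I/O implies 512-byte blocks. A quick check of the arithmetic, using the final IOPS figure reported for Nvme0n1 above:

# Sketch only: MiB/s = IOPS * io_size / 2^20
awk 'BEGIN { iops = 6792.418952571625; printf "%.2f MiB/s\n", iops * 4096 / (1024 * 1024) }'
# prints 26.53 MiB/s, matching the "mibps" field in the results block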
00:26:29.623 [2024-12-13 09:27:41.781560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.623 [2024-12-13 09:27:41.781579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.781604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:46336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.623 [2024-12-13 09:27:41.781623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.781650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:46344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.623 [2024-12-13 09:27:41.781669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.781715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:46864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.623 [2024-12-13 09:27:41.781735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.781762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:46872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.623 [2024-12-13 09:27:41.781780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.781806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:46880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.623 [2024-12-13 09:27:41.781825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.781849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:46888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.623 [2024-12-13 09:27:41.781868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.781902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:46896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.623 [2024-12-13 09:27:41.781923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.781948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:46904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.623 [2024-12-13 09:27:41.781967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.781992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:46912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.623 [2024-12-13 09:27:41.782010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:62 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.782035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:46920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.623 [2024-12-13 09:27:41.782054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.782079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:46352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.623 [2024-12-13 09:27:41.782097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.782123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:46360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.623 [2024-12-13 09:27:41.782142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.782167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:46368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.623 [2024-12-13 09:27:41.782186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.782211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:46376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.623 [2024-12-13 09:27:41.782229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.782254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:46384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.623 [2024-12-13 09:27:41.782272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.782297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.623 [2024-12-13 09:27:41.782346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:29.623 [2024-12-13 09:27:41.782374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:46400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.624 [2024-12-13 09:27:41.782393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.782419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:46408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.624 [2024-12-13 09:27:41.782439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.782465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:46416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.624 [2024-12-13 09:27:41.782506] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.782536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:46424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.624 [2024-12-13 09:27:41.782556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.782582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:46432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.624 [2024-12-13 09:27:41.782601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.782627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.624 [2024-12-13 09:27:41.782645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.782671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.624 [2024-12-13 09:27:41.782704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.782729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:46456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.624 [2024-12-13 09:27:41.782747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.782772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:46464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.624 [2024-12-13 09:27:41.782790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.782816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.624 [2024-12-13 09:27:41.782834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.782891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:46928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.624 [2024-12-13 09:27:41.782913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.782939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:46936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.624 [2024-12-13 09:27:41.782960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.782986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46944 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:29.624 [2024-12-13 09:27:41.783006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.783031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.624 [2024-12-13 09:27:41.783051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.783078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.624 [2024-12-13 09:27:41.783105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.783133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.624 [2024-12-13 09:27:41.783153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.783193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.624 [2024-12-13 09:27:41.783213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.783254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.624 [2024-12-13 09:27:41.783273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.783299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:46480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.624 [2024-12-13 09:27:41.783317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.783359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:46488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.624 [2024-12-13 09:27:41.783379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.783404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:46496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.624 [2024-12-13 09:27:41.783423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.783449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:46504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.624 [2024-12-13 09:27:41.783468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.783492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:48 nsid:1 lba:46512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.624 [2024-12-13 09:27:41.783510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.783535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:46520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.624 [2024-12-13 09:27:41.783554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.783578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:46528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.624 [2024-12-13 09:27:41.783597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.783622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:46536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.624 [2024-12-13 09:27:41.783641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.783672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:46992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.624 [2024-12-13 09:27:41.783692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.783727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:47000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.624 [2024-12-13 09:27:41.783746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.783771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.624 [2024-12-13 09:27:41.783790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.783815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:47016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.624 [2024-12-13 09:27:41.783833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.783858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:47024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.624 [2024-12-13 09:27:41.783876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.783901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:47032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.624 [2024-12-13 09:27:41.783919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.783944] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:47040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.624 [2024-12-13 09:27:41.783963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.783989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.624 [2024-12-13 09:27:41.784008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.784032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:46544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.624 [2024-12-13 09:27:41.784051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.784076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:46552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.624 [2024-12-13 09:27:41.784094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.784119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.624 [2024-12-13 09:27:41.784138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.784162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:46568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.624 [2024-12-13 09:27:41.784181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.784206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:46576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.624 [2024-12-13 09:27:41.784225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:29.624 [2024-12-13 09:27:41.784258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:46584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.625 [2024-12-13 09:27:41.784288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:41.784319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.625 [2024-12-13 09:27:41.784338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:41.784363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:46600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.625 [2024-12-13 09:27:41.784381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 
sqhd:0017 p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:41.784406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:46608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.625 [2024-12-13 09:27:41.784425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:41.784450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:46616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.625 [2024-12-13 09:27:41.784468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:41.784510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:46624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.625 [2024-12-13 09:27:41.784529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:41.784554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:46632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.625 [2024-12-13 09:27:41.784573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:41.784597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:46640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.625 [2024-12-13 09:27:41.784616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:41.784640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:46648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.625 [2024-12-13 09:27:41.784658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:41.784683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:46656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.625 [2024-12-13 09:27:41.784701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:41.784743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.625 [2024-12-13 09:27:41.784762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:41.784787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.625 [2024-12-13 09:27:41.784806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:41.784831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.625 [2024-12-13 09:27:41.784865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:41.784893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.625 [2024-12-13 09:27:41.784913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:41.784938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:46696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.625 [2024-12-13 09:27:41.784957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:41.784983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.625 [2024-12-13 09:27:41.785001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:41.785026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:46712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.625 [2024-12-13 09:27:41.785045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:41.785070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:46720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.625 [2024-12-13 09:27:41.785089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:41.785783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:46728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.625 [2024-12-13 09:27:41.785817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:41.785859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.625 [2024-12-13 09:27:41.785880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:41.785914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:47064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.625 [2024-12-13 09:27:41.785934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:41.785967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.625 [2024-12-13 09:27:41.785987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:41.786019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:47080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.625 [2024-12-13 09:27:41.786038] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:41.786071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:47088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.625 [2024-12-13 09:27:41.786090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:41.786123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:47096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.625 [2024-12-13 09:27:41.786154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:41.786190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.625 [2024-12-13 09:27:41.786211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:41.786262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.625 [2024-12-13 09:27:41.786317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:29.625 8030.00 IOPS, 31.37 MiB/s [2024-12-13T09:28:23.515Z] 7602.19 IOPS, 29.70 MiB/s [2024-12-13T09:28:23.515Z] 7624.65 IOPS, 29.78 MiB/s [2024-12-13T09:28:23.515Z] 7644.17 IOPS, 29.86 MiB/s [2024-12-13T09:28:23.515Z] 7666.26 IOPS, 29.95 MiB/s [2024-12-13T09:28:23.515Z] 7680.95 IOPS, 30.00 MiB/s [2024-12-13T09:28:23.515Z] 7693.86 IOPS, 30.05 MiB/s [2024-12-13T09:28:23.515Z] [2024-12-13 09:27:48.823653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:37016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.625 [2024-12-13 09:27:48.823718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:48.823810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:37024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.625 [2024-12-13 09:27:48.823838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:48.823867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:37032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.625 [2024-12-13 09:27:48.823886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:48.823912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.625 [2024-12-13 09:27:48.823931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:48.823956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37048 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:29.625 [2024-12-13 09:27:48.823975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:48.824000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:37056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.625 [2024-12-13 09:27:48.824019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:48.824043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:37064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.625 [2024-12-13 09:27:48.824062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:48.824087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:37072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.625 [2024-12-13 09:27:48.824106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:48.824131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.625 [2024-12-13 09:27:48.824149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:48.824202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.625 [2024-12-13 09:27:48.824222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:48.824247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:37096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.625 [2024-12-13 09:27:48.824266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:29.625 [2024-12-13 09:27:48.824319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:37104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.626 [2024-12-13 09:27:48.824343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.824371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:37112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.626 [2024-12-13 09:27:48.824390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.824416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:37120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.626 [2024-12-13 09:27:48.824435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.824460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:49 nsid:1 lba:37128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.626 [2024-12-13 09:27:48.824479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.824505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:37136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.626 [2024-12-13 09:27:48.824524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.824550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:36632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.626 [2024-12-13 09:27:48.824569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.824598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.626 [2024-12-13 09:27:48.824617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.824644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:36648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.626 [2024-12-13 09:27:48.824677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.824702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.626 [2024-12-13 09:27:48.824721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.824746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.626 [2024-12-13 09:27:48.824765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.824790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.626 [2024-12-13 09:27:48.824818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.824846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:36680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.626 [2024-12-13 09:27:48.824865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.824891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.626 [2024-12-13 09:27:48.824911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.824957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:37144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.626 [2024-12-13 09:27:48.824980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.825007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:37152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.626 [2024-12-13 09:27:48.825026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.825052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:37160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.626 [2024-12-13 09:27:48.825071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.825096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.626 [2024-12-13 09:27:48.825114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.825140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:37176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.626 [2024-12-13 09:27:48.825159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.825184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:37184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.626 [2024-12-13 09:27:48.825202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.825228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:37192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.626 [2024-12-13 09:27:48.825247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.825306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.626 [2024-12-13 09:27:48.825355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.825388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:37208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.626 [2024-12-13 09:27:48.825409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.825437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:37216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.626 [2024-12-13 09:27:48.825467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:29.626 
[2024-12-13 09:27:48.825498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:37224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.626 [2024-12-13 09:27:48.825519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.825546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:37232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.626 [2024-12-13 09:27:48.825567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.825594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:37240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.626 [2024-12-13 09:27:48.825614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.825656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:37248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.626 [2024-12-13 09:27:48.825676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.825717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:37256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.626 [2024-12-13 09:27:48.825736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.825762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:37264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.626 [2024-12-13 09:27:48.825781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.825807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:36696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.626 [2024-12-13 09:27:48.825826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.825852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:36704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.626 [2024-12-13 09:27:48.825871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.825917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.626 [2024-12-13 09:27:48.825938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.825964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:36720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.626 [2024-12-13 09:27:48.825983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.826009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:36728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.626 [2024-12-13 09:27:48.826028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.826053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:36736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.626 [2024-12-13 09:27:48.826072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.826109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:36744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.626 [2024-12-13 09:27:48.826130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.826156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:36752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.626 [2024-12-13 09:27:48.826176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.826221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:37272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.626 [2024-12-13 09:27:48.826246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.826273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:37280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.626 [2024-12-13 09:27:48.826293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.826337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:37288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.626 [2024-12-13 09:27:48.826360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:29.626 [2024-12-13 09:27:48.826386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:37296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.627 [2024-12-13 09:27:48.826405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.826431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:37304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.627 [2024-12-13 09:27:48.826450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.826476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.627 [2024-12-13 09:27:48.826495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.826520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:37320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.627 [2024-12-13 09:27:48.826540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.826565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:37328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.627 [2024-12-13 09:27:48.826584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.826610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:36760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.627 [2024-12-13 09:27:48.826629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.826656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:36768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.627 [2024-12-13 09:27:48.826676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.826718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:36776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.627 [2024-12-13 09:27:48.826738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.826765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:36784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.627 [2024-12-13 09:27:48.826800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.826826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:36792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.627 [2024-12-13 09:27:48.826855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.826918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:36800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.627 [2024-12-13 09:27:48.826940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.826974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:36808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.627 [2024-12-13 09:27:48.826996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.827025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:29.627 [2024-12-13 09:27:48.827046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.827075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:37336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.627 [2024-12-13 09:27:48.827097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.827126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:37344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.627 [2024-12-13 09:27:48.827146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.827175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:37352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.627 [2024-12-13 09:27:48.827211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.827238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:37360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.627 [2024-12-13 09:27:48.827273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.827299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:37368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.627 [2024-12-13 09:27:48.827318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.827358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:37376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.627 [2024-12-13 09:27:48.827381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.827409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:37384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.627 [2024-12-13 09:27:48.827437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.827466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:37392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.627 [2024-12-13 09:27:48.827517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.827547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.627 [2024-12-13 09:27:48.827567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.827594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 
lba:37408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.627 [2024-12-13 09:27:48.827613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.827639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.627 [2024-12-13 09:27:48.827658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.827685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.627 [2024-12-13 09:27:48.827705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.827731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:37432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.627 [2024-12-13 09:27:48.827750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.827776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.627 [2024-12-13 09:27:48.827796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.827822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.627 [2024-12-13 09:27:48.827843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.827870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.627 [2024-12-13 09:27:48.827891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.827937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:37464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.627 [2024-12-13 09:27:48.827962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.827991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.627 [2024-12-13 09:27:48.828011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.828038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.627 [2024-12-13 09:27:48.828066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:29.627 [2024-12-13 09:27:48.828095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.627 [2024-12-13 09:27:48.828116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.828142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:37496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.628 [2024-12-13 09:27:48.828161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.828187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:37504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.628 [2024-12-13 09:27:48.828206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.828232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.628 [2024-12-13 09:27:48.828252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.828292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:37520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.628 [2024-12-13 09:27:48.828315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.828342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:36824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.628 [2024-12-13 09:27:48.828362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.828389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:36832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.628 [2024-12-13 09:27:48.828422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.828448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.628 [2024-12-13 09:27:48.828467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.828493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.628 [2024-12-13 09:27:48.828511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.828537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.628 [2024-12-13 09:27:48.828555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0026 p:0 m:0 
dnr:0 00:26:29.628 [2024-12-13 09:27:48.828581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:36864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.628 [2024-12-13 09:27:48.828600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.828625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:36872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.628 [2024-12-13 09:27:48.828644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.828679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:36880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.628 [2024-12-13 09:27:48.828699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.828726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.628 [2024-12-13 09:27:48.828745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.828770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:37536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.628 [2024-12-13 09:27:48.828789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.828815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:37544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.628 [2024-12-13 09:27:48.828834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.828859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.628 [2024-12-13 09:27:48.828885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.828911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:37560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.628 [2024-12-13 09:27:48.828930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.828956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:37568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.628 [2024-12-13 09:27:48.828975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.829001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:37576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.628 [2024-12-13 09:27:48.829020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.829045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.628 [2024-12-13 09:27:48.829064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.829090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.628 [2024-12-13 09:27:48.829109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.829134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.628 [2024-12-13 09:27:48.829153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.829195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:36904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.628 [2024-12-13 09:27:48.829216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.829251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:36912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.628 [2024-12-13 09:27:48.829271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.829325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:36920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.628 [2024-12-13 09:27:48.829349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.829377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:36928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.628 [2024-12-13 09:27:48.829396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.829422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.628 [2024-12-13 09:27:48.829441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.829468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.628 [2024-12-13 09:27:48.829487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.829513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:36952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.628 [2024-12-13 09:27:48.829533] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.829558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:36960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.628 [2024-12-13 09:27:48.829577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.829603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:36968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.628 [2024-12-13 09:27:48.829623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.829649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:36976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.628 [2024-12-13 09:27:48.829685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.829711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:36984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.628 [2024-12-13 09:27:48.829730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.829756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:36992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.628 [2024-12-13 09:27:48.829774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.829800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:37000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.628 [2024-12-13 09:27:48.829819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.830580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:37008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.628 [2024-12-13 09:27:48.830625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.830670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:37592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.628 [2024-12-13 09:27:48.830691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.830725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:37600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.628 [2024-12-13 09:27:48.830744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.830777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:37608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:29.628 [2024-12-13 09:27:48.830797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:29.628 [2024-12-13 09:27:48.830829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:37616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.628 [2024-12-13 09:27:48.830875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:27:48.830930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:37624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:27:48.830952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:27:48.830988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:27:48.831009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:27:48.831044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:37640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:27:48.831066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:27:48.831137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:27:48.831165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.629 7655.41 IOPS, 29.90 MiB/s [2024-12-13T09:28:23.519Z] 7322.57 IOPS, 28.60 MiB/s [2024-12-13T09:28:23.519Z] 7017.46 IOPS, 27.41 MiB/s [2024-12-13T09:28:23.519Z] 6736.76 IOPS, 26.32 MiB/s [2024-12-13T09:28:23.519Z] 6477.65 IOPS, 25.30 MiB/s [2024-12-13T09:28:23.519Z] 6237.74 IOPS, 24.37 MiB/s [2024-12-13T09:28:23.519Z] 6014.96 IOPS, 23.50 MiB/s [2024-12-13T09:28:23.519Z] 5834.28 IOPS, 22.79 MiB/s [2024-12-13T09:28:23.519Z] 5898.73 IOPS, 23.04 MiB/s [2024-12-13T09:28:23.519Z] 5961.87 IOPS, 23.29 MiB/s [2024-12-13T09:28:23.519Z] 6022.06 IOPS, 23.52 MiB/s [2024-12-13T09:28:23.519Z] 6079.82 IOPS, 23.75 MiB/s [2024-12-13T09:28:23.519Z] 6133.47 IOPS, 23.96 MiB/s [2024-12-13T09:28:23.519Z] 6180.63 IOPS, 24.14 MiB/s [2024-12-13T09:28:23.519Z] [2024-12-13 09:28:02.129944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:28:02.130023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.130097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:28:02.130124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.130187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:103 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:28:02.130209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.130235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:28:02.130254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.130307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:28:02.130331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.130358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:28:02.130378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.130404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:28:02.130423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.130449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:28:02.130468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.130495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.629 [2024-12-13 09:28:02.130513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.130540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.629 [2024-12-13 09:28:02.130559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.130585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.629 [2024-12-13 09:28:02.130604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.130630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.629 [2024-12-13 09:28:02.130649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.130675] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.629 [2024-12-13 09:28:02.130708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.130733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:38648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.629 [2024-12-13 09:28:02.130751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.130776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.629 [2024-12-13 09:28:02.130812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.130843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.629 [2024-12-13 09:28:02.130907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.130972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:28:02.131001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.131026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:28:02.131062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.131082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:28:02.131099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.131118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:28:02.131135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.131154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:28:02.131171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.131205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:28:02.131222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.131254] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:28:02.131271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.131289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:28:02.131305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.131339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:28:02.131355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.131389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:28:02.131409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.131428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:28:02.131445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.131483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:28:02.131503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.131521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:28:02.131538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.131557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:28:02.131574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.131592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:28:02.131609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.131627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.629 [2024-12-13 09:28:02.131643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.131661] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.629 [2024-12-13 09:28:02.131679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.629 [2024-12-13 09:28:02.131698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.630 [2024-12-13 09:28:02.131729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.131747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.630 [2024-12-13 09:28:02.131763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.131781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.630 [2024-12-13 09:28:02.131797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.131815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.630 [2024-12-13 09:28:02.131831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.131849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.630 [2024-12-13 09:28:02.131865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.131883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.630 [2024-12-13 09:28:02.131900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.131917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.630 [2024-12-13 09:28:02.131947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.131967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.630 [2024-12-13 09:28:02.131984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.630 [2024-12-13 09:28:02.132018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39136 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.630 [2024-12-13 09:28:02.132073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.630 [2024-12-13 09:28:02.132108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.630 [2024-12-13 09:28:02.132142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.630 [2024-12-13 09:28:02.132176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.630 [2024-12-13 09:28:02.132211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.630 [2024-12-13 09:28:02.132244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.630 [2024-12-13 09:28:02.132294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.630 [2024-12-13 09:28:02.132334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.630 [2024-12-13 09:28:02.132368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.630 [2024-12-13 09:28:02.132403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
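The parenthesized pairs in the completions above are the NVMe status fields printed as SCT/SC: "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" is Path Related Status (3h) with status code 02h (ANA Inaccessible), reported while the tested path is being failed over, and "ABORTED - SQ DELETION (00/08)" is Generic Status (0h) with status code 08h, returned for I/O still queued when the submission queue is torn down. A quick, optional way to tally how many of each the run produced, assuming this console output has been saved to a local file (build.log below is only a placeholder name, not something the test scripts create):

# Tally NVMe completion statuses from a saved copy of this console log.
# build.log is an assumed local file; adjust the path to wherever the log was saved.
grep -o 'spdk_nvme_print_completion: \*NOTICE\*: [A-Z -]*([0-9a-f]*/[0-9a-f]*)' build.log \
  | sort | uniq -c | sort -rn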
00:26:29.630 [2024-12-13 09:28:02.132448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.630 [2024-12-13 09:28:02.132482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.630 [2024-12-13 09:28:02.132516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.630 [2024-12-13 09:28:02.132550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:38736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.630 [2024-12-13 09:28:02.132584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.630 [2024-12-13 09:28:02.132618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.630 [2024-12-13 09:28:02.132652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.630 [2024-12-13 09:28:02.132686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.630 [2024-12-13 09:28:02.132720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.630 [2024-12-13 09:28:02.132753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.630 [2024-12-13 09:28:02.132788] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.630 [2024-12-13 09:28:02.132822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.630 [2024-12-13 09:28:02.132864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.630 [2024-12-13 09:28:02.132900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.630 [2024-12-13 09:28:02.132934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.630 [2024-12-13 09:28:02.132969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.132987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.630 [2024-12-13 09:28:02.133003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.133020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.630 [2024-12-13 09:28:02.133037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.133054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.630 [2024-12-13 09:28:02.133071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.133089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.630 [2024-12-13 09:28:02.133106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.133124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.630 [2024-12-13 09:28:02.133141] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.630 [2024-12-13 09:28:02.133159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.630 [2024-12-13 09:28:02.133175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.133193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.631 [2024-12-13 09:28:02.133209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.133226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.631 [2024-12-13 09:28:02.133243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.133260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.631 [2024-12-13 09:28:02.133288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.133311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.631 [2024-12-13 09:28:02.133335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.133354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.631 [2024-12-13 09:28:02.133371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.133390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.631 [2024-12-13 09:28:02.133408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.133425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.631 [2024-12-13 09:28:02.133442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.133460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.631 [2024-12-13 09:28:02.133477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.133495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.631 [2024-12-13 09:28:02.133512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.133529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.631 [2024-12-13 09:28:02.133546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.133564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.631 [2024-12-13 09:28:02.133580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.133598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.631 [2024-12-13 09:28:02.133615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.133633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.631 [2024-12-13 09:28:02.133649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.133666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.631 [2024-12-13 09:28:02.133683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.133701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.631 [2024-12-13 09:28:02.133718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.133735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.631 [2024-12-13 09:28:02.133752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.133777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.631 [2024-12-13 09:28:02.133795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.133813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.631 [2024-12-13 09:28:02.133829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.133846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.631 [2024-12-13 09:28:02.133862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 
[2024-12-13 09:28:02.133881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.631 [2024-12-13 09:28:02.133897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.133915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.631 [2024-12-13 09:28:02.133931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.133949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.631 [2024-12-13 09:28:02.133966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.134001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.631 [2024-12-13 09:28:02.134019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.134038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.631 [2024-12-13 09:28:02.134055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.134089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:38880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.631 [2024-12-13 09:28:02.134106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.134125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.631 [2024-12-13 09:28:02.134150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.134168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:38896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.631 [2024-12-13 09:28:02.134186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.134205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.631 [2024-12-13 09:28:02.134222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.134240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.631 [2024-12-13 09:28:02.134264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.134283] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002bf00 is same with the state(6) to be set 00:26:29.631 [2024-12-13 09:28:02.134332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.631 [2024-12-13 09:28:02.134355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.631 [2024-12-13 09:28:02.134371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38920 len:8 PRP1 0x0 PRP2 0x0 00:26:29.631 [2024-12-13 09:28:02.134388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.134421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.631 [2024-12-13 09:28:02.134449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.631 [2024-12-13 09:28:02.134463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39440 len:8 PRP1 0x0 PRP2 0x0 00:26:29.631 [2024-12-13 09:28:02.134479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.134495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.631 [2024-12-13 09:28:02.134507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.631 [2024-12-13 09:28:02.134519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39448 len:8 PRP1 0x0 PRP2 0x0 00:26:29.631 [2024-12-13 09:28:02.134535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.134550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.631 [2024-12-13 09:28:02.134562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.631 [2024-12-13 09:28:02.134575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39456 len:8 PRP1 0x0 PRP2 0x0 00:26:29.631 [2024-12-13 09:28:02.134591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.134606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.631 [2024-12-13 09:28:02.134618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.631 [2024-12-13 09:28:02.134631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39464 len:8 PRP1 0x0 PRP2 0x0 00:26:29.631 [2024-12-13 09:28:02.134646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.134662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.631 [2024-12-13 09:28:02.134674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.631 [2024-12-13 09:28:02.134686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39472 len:8 PRP1 0x0 PRP2 0x0 00:26:29.631 [2024-12-13 09:28:02.134702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.631 [2024-12-13 09:28:02.134717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.632 [2024-12-13 09:28:02.134729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.632 [2024-12-13 09:28:02.134741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39480 len:8 PRP1 0x0 PRP2 0x0 00:26:29.632 [2024-12-13 09:28:02.134756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.632 [2024-12-13 09:28:02.134771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.632 [2024-12-13 09:28:02.134791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.632 [2024-12-13 09:28:02.134805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39488 len:8 PRP1 0x0 PRP2 0x0 00:26:29.632 [2024-12-13 09:28:02.134821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.632 [2024-12-13 09:28:02.134836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.632 [2024-12-13 09:28:02.134876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.632 [2024-12-13 09:28:02.134908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39496 len:8 PRP1 0x0 PRP2 0x0 00:26:29.632 [2024-12-13 09:28:02.134924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.632 [2024-12-13 09:28:02.134940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.632 [2024-12-13 09:28:02.134953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.632 [2024-12-13 09:28:02.134966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39504 len:8 PRP1 0x0 PRP2 0x0 00:26:29.632 [2024-12-13 09:28:02.134982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.632 [2024-12-13 09:28:02.134998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.632 [2024-12-13 09:28:02.135010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.632 [2024-12-13 09:28:02.135024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39512 len:8 PRP1 0x0 PRP2 0x0 00:26:29.632 [2024-12-13 09:28:02.135040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.632 [2024-12-13 09:28:02.135056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.632 [2024-12-13 09:28:02.135068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.632 [2024-12-13 09:28:02.135081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39520 len:8 PRP1 0x0 PRP2 0x0 00:26:29.632 [2024-12-13 09:28:02.135099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:29.632 [2024-12-13 09:28:02.135115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.632 [2024-12-13 09:28:02.135127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.632 [2024-12-13 09:28:02.135141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39528 len:8 PRP1 0x0 PRP2 0x0 00:26:29.632 [2024-12-13 09:28:02.135157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.632 [2024-12-13 09:28:02.135173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.632 [2024-12-13 09:28:02.135197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.632 [2024-12-13 09:28:02.135210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39536 len:8 PRP1 0x0 PRP2 0x0 00:26:29.632 [2024-12-13 09:28:02.135226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.632 [2024-12-13 09:28:02.135256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.632 [2024-12-13 09:28:02.135268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.632 [2024-12-13 09:28:02.135280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39544 len:8 PRP1 0x0 PRP2 0x0 00:26:29.632 [2024-12-13 09:28:02.135307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.632 [2024-12-13 09:28:02.135329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.632 [2024-12-13 09:28:02.135354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.632 [2024-12-13 09:28:02.135370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39552 len:8 PRP1 0x0 PRP2 0x0 00:26:29.632 [2024-12-13 09:28:02.135387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.632 [2024-12-13 09:28:02.135403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.632 [2024-12-13 09:28:02.135415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.632 [2024-12-13 09:28:02.135428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39560 len:8 PRP1 0x0 PRP2 0x0 00:26:29.632 [2024-12-13 09:28:02.135444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.632 [2024-12-13 09:28:02.135459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.632 [2024-12-13 09:28:02.135471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.632 [2024-12-13 09:28:02.135484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39568 len:8 PRP1 0x0 PRP2 0x0 00:26:29.632 [2024-12-13 09:28:02.135500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.632 [2024-12-13 09:28:02.135515] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.632 [2024-12-13 09:28:02.135527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.632 [2024-12-13 09:28:02.135540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39576 len:8 PRP1 0x0 PRP2 0x0 00:26:29.632 [2024-12-13 09:28:02.135556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.632 [2024-12-13 09:28:02.135571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.632 [2024-12-13 09:28:02.135583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.632 [2024-12-13 09:28:02.135596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39584 len:8 PRP1 0x0 PRP2 0x0 00:26:29.632 [2024-12-13 09:28:02.135616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.632 [2024-12-13 09:28:02.135632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.632 [2024-12-13 09:28:02.135644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.632 [2024-12-13 09:28:02.135658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39592 len:8 PRP1 0x0 PRP2 0x0 00:26:29.632 [2024-12-13 09:28:02.135689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.632 [2024-12-13 09:28:02.135704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.632 [2024-12-13 09:28:02.135716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.632 [2024-12-13 09:28:02.135733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39600 len:8 PRP1 0x0 PRP2 0x0 00:26:29.632 [2024-12-13 09:28:02.135749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.632 [2024-12-13 09:28:02.135764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.632 [2024-12-13 09:28:02.135776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.632 [2024-12-13 09:28:02.135789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39608 len:8 PRP1 0x0 PRP2 0x0 00:26:29.632 [2024-12-13 09:28:02.135811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.632 [2024-12-13 09:28:02.135827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.632 [2024-12-13 09:28:02.135840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.632 [2024-12-13 09:28:02.135852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39616 len:8 PRP1 0x0 PRP2 0x0 00:26:29.632 [2024-12-13 09:28:02.135868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.632 [2024-12-13 09:28:02.135883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:26:29.632 [2024-12-13 09:28:02.135894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.632 [2024-12-13 09:28:02.135907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39624 len:8 PRP1 0x0 PRP2 0x0 00:26:29.632 [2024-12-13 09:28:02.135922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.632 [2024-12-13 09:28:02.136300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.632 [2024-12-13 09:28:02.136333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.632 [2024-12-13 09:28:02.136353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.632 [2024-12-13 09:28:02.136370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.632 [2024-12-13 09:28:02.136387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.632 [2024-12-13 09:28:02.136402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.632 [2024-12-13 09:28:02.136418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.633 [2024-12-13 09:28:02.136434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.633 [2024-12-13 09:28:02.136453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.633 [2024-12-13 09:28:02.136469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.633 [2024-12-13 09:28:02.136495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(6) to be set 00:26:29.633 [2024-12-13 09:28:02.137674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:29.633 [2024-12-13 09:28:02.137753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b500 (9): Bad file descriptor 00:26:29.633 [2024-12-13 09:28:02.138183] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.633 [2024-12-13 09:28:02.138224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b500 with addr=10.0.0.3, port=4421 00:26:29.633 [2024-12-13 09:28:02.138246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(6) to be set 00:26:29.633 [2024-12-13 09:28:02.138352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b500 (9): Bad file descriptor 00:26:29.633 [2024-12-13 09:28:02.138401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:29.633 [2024-12-13 
09:28:02.138439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:29.633 [2024-12-13 09:28:02.138466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:29.633 [2024-12-13 09:28:02.138484] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:29.633 [2024-12-13 09:28:02.138502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:29.633 6224.08 IOPS, 24.31 MiB/s [2024-12-13T09:28:23.523Z] 6258.89 IOPS, 24.45 MiB/s [2024-12-13T09:28:23.523Z] 6301.55 IOPS, 24.62 MiB/s [2024-12-13T09:28:23.523Z] 6342.03 IOPS, 24.77 MiB/s [2024-12-13T09:28:23.523Z] 6379.07 IOPS, 24.92 MiB/s [2024-12-13T09:28:23.523Z] 6414.12 IOPS, 25.06 MiB/s [2024-12-13T09:28:23.523Z] 6447.88 IOPS, 25.19 MiB/s [2024-12-13T09:28:23.523Z] 6475.98 IOPS, 25.30 MiB/s [2024-12-13T09:28:23.523Z] 6504.80 IOPS, 25.41 MiB/s [2024-12-13T09:28:23.523Z] 6533.76 IOPS, 25.52 MiB/s [2024-12-13T09:28:23.523Z] [2024-12-13 09:28:12.199290] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:26:29.633 6563.11 IOPS, 25.64 MiB/s [2024-12-13T09:28:23.523Z] 6593.30 IOPS, 25.76 MiB/s [2024-12-13T09:28:23.523Z] 6623.60 IOPS, 25.87 MiB/s [2024-12-13T09:28:23.523Z] 6651.31 IOPS, 25.98 MiB/s [2024-12-13T09:28:23.523Z] 6672.94 IOPS, 26.07 MiB/s [2024-12-13T09:28:23.523Z] 6696.98 IOPS, 26.16 MiB/s [2024-12-13T09:28:23.523Z] 6720.50 IOPS, 26.25 MiB/s [2024-12-13T09:28:23.523Z] 6745.00 IOPS, 26.35 MiB/s [2024-12-13T09:28:23.523Z] 6766.17 IOPS, 26.43 MiB/s [2024-12-13T09:28:23.523Z] 6787.09 IOPS, 26.51 MiB/s [2024-12-13T09:28:23.523Z] Received shutdown signal, test time was about 55.383289 seconds 00:26:29.633 00:26:29.633 Latency(us) 00:26:29.633 [2024-12-13T09:28:23.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.633 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:29.633 Verification LBA range: start 0x0 length 0x4000 00:26:29.633 Nvme0n1 : 55.38 6792.42 26.53 0.00 0.00 18819.92 588.33 7046430.72 00:26:29.633 [2024-12-13T09:28:23.523Z] =================================================================================================================== 00:26:29.633 [2024-12-13T09:28:23.523Z] Total : 6792.42 26.53 0.00 0.00 18819.92 588.33 7046430.72 00:26:29.633 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:29.633 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:26:29.633 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:29.633 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:26:29.633 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:29.633 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:26:29.633 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:29.633 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:26:29.633 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:26:29.633 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:29.633 rmmod nvme_tcp 00:26:29.633 rmmod nvme_fabrics 00:26:29.633 rmmod nvme_keyring 00:26:29.892 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:29.892 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:26:29.892 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:26:29.892 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 88816 ']' 00:26:29.892 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 88816 00:26:29.892 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 88816 ']' 00:26:29.892 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 88816 00:26:29.892 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:26:29.892 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:29.892 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88816 00:26:29.892 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:29.892 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:29.892 killing process with pid 88816 00:26:29.892 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88816' 00:26:29.892 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 88816 00:26:29.892 09:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 88816 00:26:30.830 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:30.830 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:30.830 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:30.830 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:26:30.830 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:26:30.830 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:30.830 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:26:30.830 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:30.830 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:30.830 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:30.830 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:30.830 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:30.830 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:30.830 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link 
set nvmf_init_br down 00:26:30.830 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:30.830 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:30.830 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:30.830 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:30.830 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:30.830 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:30.830 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:30.830 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:31.089 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:31.089 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.089 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:31.089 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.089 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:26:31.089 00:26:31.089 real 1m3.114s 00:26:31.089 user 2m54.387s 00:26:31.089 sys 0m17.355s 00:26:31.089 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:31.089 09:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:31.089 ************************************ 00:26:31.089 END TEST nvmf_host_multipath 00:26:31.089 ************************************ 00:26:31.089 09:28:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:26:31.089 09:28:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:31.089 09:28:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:31.089 09:28:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.089 ************************************ 00:26:31.089 START TEST nvmf_timeout 00:26:31.089 ************************************ 00:26:31.089 09:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:26:31.089 * Looking for test storage... 
00:26:31.089 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:31.089 09:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:31.089 09:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:31.089 09:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:26:31.353 09:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:31.353 09:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:31.353 09:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:31.353 09:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:31.353 09:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:31.353 09:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:31.353 09:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:31.353 09:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:31.354 09:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:31.354 09:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:31.354 09:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:31.354 09:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:31.354 09:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:31.354 09:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:26:31.354 09:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:31.354 09:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:31.354 09:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:31.354 09:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:26:31.354 09:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:31.354 09:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:26:31.354 09:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:31.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.354 --rc genhtml_branch_coverage=1 00:26:31.354 --rc genhtml_function_coverage=1 00:26:31.354 --rc genhtml_legend=1 00:26:31.354 --rc geninfo_all_blocks=1 00:26:31.354 --rc geninfo_unexecuted_blocks=1 00:26:31.354 00:26:31.354 ' 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:31.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.354 --rc genhtml_branch_coverage=1 00:26:31.354 --rc genhtml_function_coverage=1 00:26:31.354 --rc genhtml_legend=1 00:26:31.354 --rc geninfo_all_blocks=1 00:26:31.354 --rc geninfo_unexecuted_blocks=1 00:26:31.354 00:26:31.354 ' 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:31.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.354 --rc genhtml_branch_coverage=1 00:26:31.354 --rc genhtml_function_coverage=1 00:26:31.354 --rc genhtml_legend=1 00:26:31.354 --rc geninfo_all_blocks=1 00:26:31.354 --rc geninfo_unexecuted_blocks=1 00:26:31.354 00:26:31.354 ' 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:31.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.354 --rc genhtml_branch_coverage=1 00:26:31.354 --rc genhtml_function_coverage=1 00:26:31.354 --rc genhtml_legend=1 00:26:31.354 --rc geninfo_all_blocks=1 00:26:31.354 --rc geninfo_unexecuted_blocks=1 00:26:31.354 00:26:31.354 ' 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:31.354 
09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:31.354 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:31.354 09:28:25 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:31.354 Cannot find device "nvmf_init_br" 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:31.354 Cannot find device "nvmf_init_br2" 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:26:31.354 Cannot find device "nvmf_tgt_br" 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:31.354 Cannot find device "nvmf_tgt_br2" 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:31.354 Cannot find device "nvmf_init_br" 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:31.354 Cannot find device "nvmf_init_br2" 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:31.354 Cannot find device "nvmf_tgt_br" 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:31.354 Cannot find device "nvmf_tgt_br2" 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:31.354 Cannot find device "nvmf_br" 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:31.354 Cannot find device "nvmf_init_if" 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:31.354 Cannot find device "nvmf_init_if2" 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:26:31.354 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:31.355 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:31.355 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:26:31.355 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:31.355 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:31.355 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:26:31.355 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:31.355 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:31.355 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:31.355 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:31.355 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:31.355 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:26:31.355 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:31.355 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:31.355 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:31.355 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
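The trace above is nvmf_veth_init building the virtual test network for NET_TYPE=virt: veth pairs for the initiator and target sides, the target ends moved into the nvmf_tgt_ns_spdk namespace, 10.0.0.1/10.0.0.2 on the initiator side and 10.0.0.3/10.0.0.4 inside the namespace, every port joined by the nvmf_br bridge, plus iptables ACCEPT rules for TCP port 4420 and for bridge forwarding. A condensed sketch of the same topology, showing only the first of the two interface pairs (the *_if2 pair with 10.0.0.2/10.0.0.4 is set up identically):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The ping checks that follow confirm the initiator can reach 10.0.0.3/10.0.0.4 inside the namespace and vice versa; 10.0.0.3:4420 is the address the NVMe/TCP listener is added on later in this test.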
00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:31.617 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:31.617 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:26:31.617 00:26:31.617 --- 10.0.0.3 ping statistics --- 00:26:31.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.617 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:31.617 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:31.617 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:26:31.617 00:26:31.617 --- 10.0.0.4 ping statistics --- 00:26:31.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.617 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:31.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:31.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:26:31.617 00:26:31.617 --- 10.0.0.1 ping statistics --- 00:26:31.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.617 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:31.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:31.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:26:31.617 00:26:31.617 --- 10.0.0.2 ping statistics --- 00:26:31.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.617 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=90041 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 90041 00:26:31.617 09:28:25 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 90041 ']' 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:31.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:31.617 09:28:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:31.876 [2024-12-13 09:28:25.527197] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:26:31.876 [2024-12-13 09:28:25.527387] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:31.876 [2024-12-13 09:28:25.707814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:32.135 [2024-12-13 09:28:25.790136] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:32.135 [2024-12-13 09:28:25.790194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.135 [2024-12-13 09:28:25.790226] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:32.135 [2024-12-13 09:28:25.790250] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:32.135 [2024-12-13 09:28:25.790262] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
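nvmfappstart then launches the target inside that namespace and waits for its RPC socket; the trace records the pid as nvmfpid=90041. Roughly, assuming the usual background-launch-and-capture pattern in nvmf/common.sh (the `&` / `$!` capture itself is not shown verbatim in the trace):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # returns once /var/tmp/spdk.sock is accepting RPCs

-m 0x3 pins the target to cores 0 and 1, which is why the log that follows reports reactors starting on core 0 and core 1.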
00:26:32.135 [2024-12-13 09:28:25.791954] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.135 [2024-12-13 09:28:25.791972] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.135 [2024-12-13 09:28:25.939772] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:32.703 09:28:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:32.703 09:28:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:32.703 09:28:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:32.703 09:28:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:32.703 09:28:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.703 09:28:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:32.703 09:28:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:32.703 09:28:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:32.961 [2024-12-13 09:28:26.824727] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:32.961 09:28:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:33.529 Malloc0 00:26:33.529 09:28:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:33.529 09:28:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:33.788 09:28:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:34.047 [2024-12-13 09:28:27.785780] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:34.047 09:28:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=90095 00:26:34.047 09:28:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 90095 /var/tmp/bdevperf.sock 00:26:34.047 09:28:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 90095 ']' 00:26:34.047 09:28:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:34.047 09:28:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:34.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:34.047 09:28:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
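Once the target's RPC socket is up, timeout.sh configures it entirely through rpc.py; the sequence above, condensed ($rpc is just shorthand for the full rpc.py path used in the trace; NQN, serial, sizes and address are exactly as traced):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The result is a 64 MiB malloc-backed namespace with 512-byte blocks (per MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE) exported as nqn.2016-06.io.spdk:cnode1 over NVMe/TCP on 10.0.0.3:4420.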
00:26:34.047 09:28:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:34.047 09:28:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:34.047 09:28:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:26:34.047 [2024-12-13 09:28:27.892650] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:26:34.047 [2024-12-13 09:28:27.892791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90095 ] 00:26:34.306 [2024-12-13 09:28:28.057607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.306 [2024-12-13 09:28:28.145409] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:34.565 [2024-12-13 09:28:28.298632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:35.133 09:28:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:35.133 09:28:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:35.133 09:28:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:35.392 09:28:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:26:35.650 NVMe0n1 00:26:35.650 09:28:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=90114 00:26:35.651 09:28:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:35.651 09:28:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:26:35.651 Running I/O for 10 seconds... 
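On the initiator side the workload generator is bdevperf, started with -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f and then driven over its own RPC socket, as traced above; condensed:

    bp_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $bp_rpc bdev_nvme_set_options -r -1
    $bp_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

-m 0x4 puts bdevperf on core 2, separate from the target's cores 0-1. With -o 4096, the first per-second sample of 6420.00 IOPS works out to 6420 x 4096 B / 2^20 ≈ 25.08 MiB/s, matching the readout. The --ctrlr-loss-timeout-sec 5 / --reconnect-delay-sec 2 pair is what the timeout test exercises: right after the run starts, host/timeout.sh removes the 10.0.0.3:4420 listener, so the trace below is dominated by ABORTED - SQ DELETION completions for queued WRITEs while the reconnect logic configured here takes over.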
00:26:36.587 09:28:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:36.849 6420.00 IOPS, 25.08 MiB/s [2024-12-13T09:28:30.739Z] [2024-12-13 09:28:30.680920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.849 [2024-12-13 09:28:30.681003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.849 [2024-12-13 09:28:30.681039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.849 [2024-12-13 09:28:30.681055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.849 [2024-12-13 09:28:30.681074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.849 [2024-12-13 09:28:30.681088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.849 [2024-12-13 09:28:30.681103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.849 [2024-12-13 09:28:30.681115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.849 [2024-12-13 09:28:30.681130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.849 [2024-12-13 09:28:30.681142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.849 [2024-12-13 09:28:30.681157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.849 [2024-12-13 09:28:30.681169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.849 [2024-12-13 09:28:30.681185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.849 [2024-12-13 09:28:30.681197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.849 [2024-12-13 09:28:30.681211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.849 [2024-12-13 09:28:30.681223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.849 [2024-12-13 09:28:30.681238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.849 [2024-12-13 09:28:30.681265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.849 [2024-12-13 09:28:30.681281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60064 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.849 [2024-12-13 09:28:30.681308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.849 [2024-12-13 09:28:30.681339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.849 [2024-12-13 09:28:30.681356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.849 [2024-12-13 09:28:30.681373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.849 [2024-12-13 09:28:30.681386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.849 [2024-12-13 09:28:30.681402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.849 [2024-12-13 09:28:30.681416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.849 [2024-12-13 09:28:30.681433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.849 [2024-12-13 09:28:30.681445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.849 [2024-12-13 09:28:30.681463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.849 [2024-12-13 09:28:30.681476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.849 [2024-12-13 09:28:30.681492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.849 [2024-12-13 09:28:30.681504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.849 [2024-12-13 09:28:30.681522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.849 [2024-12-13 09:28:30.681535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.849 [2024-12-13 09:28:30.681551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.849 [2024-12-13 09:28:30.681564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.849 [2024-12-13 09:28:30.681581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.849 [2024-12-13 09:28:30.681594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.849 [2024-12-13 09:28:30.681610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:36.849 [2024-12-13 09:28:30.681622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.849 [2024-12-13 09:28:30.681638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.849 [2024-12-13 09:28:30.681651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.849 [2024-12-13 09:28:30.681667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.849 [2024-12-13 09:28:30.681679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.849 [2024-12-13 09:28:30.681694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.849 [2024-12-13 09:28:30.681707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.849 [2024-12-13 09:28:30.681723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.849 [2024-12-13 09:28:30.681736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.849 [2024-12-13 09:28:30.681751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.849 [2024-12-13 09:28:30.681763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.849 [2024-12-13 09:28:30.681779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.849 [2024-12-13 09:28:30.681792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.849 [2024-12-13 09:28:30.681809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.849 [2024-12-13 09:28:30.681822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.681839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.681852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.681868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.681880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.681896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.681908] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.681924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.681937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.681953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.681965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.681981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.681994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682193] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 
[2024-12-13 09:28:30.682833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.682978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.682991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.850 [2024-12-13 09:28:30.683007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.850 [2024-12-13 09:28:30.683020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.851 [2024-12-13 09:28:30.683053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.851 [2024-12-13 09:28:30.683081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.851 [2024-12-13 09:28:30.683110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.851 [2024-12-13 09:28:30.683139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683155] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.851 [2024-12-13 09:28:30.683185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.683215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.683260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.683289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.683320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.683364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.683396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.683425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.683455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.683484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:61 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.683515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.683544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.683575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.683604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.683633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.683662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.851 [2024-12-13 09:28:30.683691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.851 [2024-12-13 09:28:30.683720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:59680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.683749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:59688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.683778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59696 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.683808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:59704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.683838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.683866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:59720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.683898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.683933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.851 [2024-12-13 09:28:30.683965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.683981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:59736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.683994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.684010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.684023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.684041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:59752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.684053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.684069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.684082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.684098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:36.851 [2024-12-13 09:28:30.684111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.684128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.684140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.684157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.684170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.684186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:59792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.684198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.684215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.684227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.684243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:59808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.851 [2024-12-13 09:28:30.684256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.851 [2024-12-13 09:28:30.684275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.852 [2024-12-13 09:28:30.684301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.852 [2024-12-13 09:28:30.684335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:59824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.852 [2024-12-13 09:28:30.684349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.852 [2024-12-13 09:28:30.684366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:59832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.852 [2024-12-13 09:28:30.684379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.852 [2024-12-13 09:28:30.684396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.852 [2024-12-13 09:28:30.684420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.852 [2024-12-13 09:28:30.684440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:59848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.852 [2024-12-13 09:28:30.684454] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.852 [2024-12-13 09:28:30.684470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.852 [2024-12-13 09:28:30.684484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.852 [2024-12-13 09:28:30.684500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:59864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.852 [2024-12-13 09:28:30.684514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.852 [2024-12-13 09:28:30.684530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.852 [2024-12-13 09:28:30.684543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.852 [2024-12-13 09:28:30.684563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.852 [2024-12-13 09:28:30.684577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.852 [2024-12-13 09:28:30.684619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:59888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.852 [2024-12-13 09:28:30.684633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.852 [2024-12-13 09:28:30.684650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:59896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.852 [2024-12-13 09:28:30.684663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.852 [2024-12-13 09:28:30.684680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.852 [2024-12-13 09:28:30.684708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.852 [2024-12-13 09:28:30.684725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:59912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.852 [2024-12-13 09:28:30.684738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.852 [2024-12-13 09:28:30.684756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:59920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.852 [2024-12-13 09:28:30.684770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.852 [2024-12-13 09:28:30.684786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.852 [2024-12-13 09:28:30.684799] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.852 [2024-12-13 09:28:30.684815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:59936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.852 [2024-12-13 09:28:30.684828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.852 [2024-12-13 09:28:30.684846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.852 [2024-12-13 09:28:30.684859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.852 [2024-12-13 09:28:30.684875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.852 [2024-12-13 09:28:30.684888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.852 [2024-12-13 09:28:30.684904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.852 [2024-12-13 09:28:30.684917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.852 [2024-12-13 09:28:30.684933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.852 [2024-12-13 09:28:30.684945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.852 [2024-12-13 09:28:30.684962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:59976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.852 [2024-12-13 09:28:30.684975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.852 [2024-12-13 09:28:30.684991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(6) to be set 00:26:36.852 [2024-12-13 09:28:30.685009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:36.852 [2024-12-13 09:28:30.685024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:36.852 [2024-12-13 09:28:30.685036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59984 len:8 PRP1 0x0 PRP2 0x0 00:26:36.852 [2024-12-13 09:28:30.685051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.852 [2024-12-13 09:28:30.685407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.852 [2024-12-13 09:28:30.685432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.852 [2024-12-13 09:28:30.685450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.852 [2024-12-13 09:28:30.685463] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.852 [2024-12-13 09:28:30.685478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.852 [2024-12-13 09:28:30.685490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.852 [2024-12-13 09:28:30.685504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.852 [2024-12-13 09:28:30.685516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.852 [2024-12-13 09:28:30.685530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:36.852 [2024-12-13 09:28:30.685773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:36.852 [2024-12-13 09:28:30.685822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:36.852 [2024-12-13 09:28:30.685966] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.852 [2024-12-13 09:28:30.685998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:26:36.852 [2024-12-13 09:28:30.686017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:36.852 [2024-12-13 09:28:30.686045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:36.852 [2024-12-13 09:28:30.686097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:36.852 [2024-12-13 09:28:30.686116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:36.852 [2024-12-13 09:28:30.686135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:36.852 [2024-12-13 09:28:30.686150] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
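The burst of ABORTED - SQ DELETION completions above is the expected fallout of host/timeout.sh@55 pulling the listener out from under an active verify job: submission queue 1 is torn down, every queued READ/WRITE on it is completed with an abort status, and bdev_nvme enters its reconnect loop. The timestamps that follow trace the cadence the attach options dictate: connect() is refused with errno 111 at 09:28:30, again at 09:28:32 and 09:28:34 (the 2-second --reconnect-delay-sec), and by 09:28:36 the 5-second --ctrlr-loss-timeout-sec budget has expired and the controller is reported as already in failed state. A small, hedged sketch of watching this from the RPC side while the listener is down; the loop itself is illustrative, but both RPCs are the same ones the harness calls just below:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    for _ in 1 2 3 4 5; do
        $RPC bdev_nvme_get_controllers | jq -r '.[].name'   # prints NVMe0 at 09:28:32, empty after the controller is given up
        $RPC bdev_get_bdevs | jq -r '.[].name'              # prints NVMe0n1 at 09:28:32, empty after the controller is given up
        sleep 1
    done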
00:26:36.852 [2024-12-13 09:28:30.686168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:36.852 09:28:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
00:26:38.723 3722.50 IOPS, 14.54 MiB/s [2024-12-13T09:28:32.873Z] 2481.67 IOPS, 9.69 MiB/s [2024-12-13T09:28:32.873Z] [2024-12-13 09:28:32.686366] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.983 [2024-12-13 09:28:32.686438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420
00:26:38.983 [2024-12-13 09:28:32.686463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set
00:26:38.983 [2024-12-13 09:28:32.686497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:26:38.983 [2024-12-13 09:28:32.686532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:26:38.983 [2024-12-13 09:28:32.686547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:26:38.983 [2024-12-13 09:28:32.686563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:38.983 [2024-12-13 09:28:32.686580] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:26:38.983 [2024-12-13 09:28:32.686596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:38.983 09:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
00:26:38.983 09:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:38.983 09:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:26:39.242 09:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:26:39.242 09:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
00:26:39.242 09:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:26:39.242 09:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:26:39.501 09:28:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:26:39.501 09:28:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
00:26:40.696 1861.25 IOPS, 7.27 MiB/s [2024-12-13T09:28:34.845Z] 1489.00 IOPS, 5.82 MiB/s [2024-12-13T09:28:34.845Z] [2024-12-13 09:28:34.686789] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.955 [2024-12-13 09:28:34.686902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420
00:26:40.955 [2024-12-13 09:28:34.686927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set
00:26:40.955 [2024-12-13 09:28:34.686962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:26:40.955 [2024-12-13 09:28:34.686994]
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:40.955 [2024-12-13 09:28:34.687009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:40.955 [2024-12-13 09:28:34.687025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:40.955 [2024-12-13 09:28:34.687042] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:26:40.955 [2024-12-13 09:28:34.687059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:42.862 1240.83 IOPS, 4.85 MiB/s [2024-12-13T09:28:36.752Z] 1063.57 IOPS, 4.15 MiB/s [2024-12-13T09:28:36.752Z] [2024-12-13 09:28:36.687114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:42.862 [2024-12-13 09:28:36.687200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:42.862 [2024-12-13 09:28:36.687217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:42.862 [2024-12-13 09:28:36.687247] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:26:42.862 [2024-12-13 09:28:36.687264] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:26:44.057 930.62 IOPS, 3.64 MiB/s 00:26:44.057 Latency(us) 00:26:44.057 [2024-12-13T09:28:37.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:44.057 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:44.057 Verification LBA range: start 0x0 length 0x4000 00:26:44.057 NVMe0n1 : 8.17 910.92 3.56 15.66 0.00 137906.95 4170.47 7015926.69 00:26:44.057 [2024-12-13T09:28:37.948Z] =================================================================================================================== 00:26:44.058 [2024-12-13T09:28:37.948Z] Total : 910.92 3.56 15.66 0.00 137906.95 4170.47 7015926.69 00:26:44.058 { 00:26:44.058 "results": [ 00:26:44.058 { 00:26:44.058 "job": "NVMe0n1", 00:26:44.058 "core_mask": "0x4", 00:26:44.058 "workload": "verify", 00:26:44.058 "status": "finished", 00:26:44.058 "verify_range": { 00:26:44.058 "start": 0, 00:26:44.058 "length": 16384 00:26:44.058 }, 00:26:44.058 "queue_depth": 128, 00:26:44.058 "io_size": 4096, 00:26:44.058 "runtime": 8.173022, 00:26:44.058 "iops": 910.9237684665476, 00:26:44.058 "mibps": 3.5582959705724515, 00:26:44.058 "io_failed": 128, 00:26:44.058 "io_timeout": 0, 00:26:44.058 "avg_latency_us": 137906.9521222525, 00:26:44.058 "min_latency_us": 4170.472727272727, 00:26:44.058 "max_latency_us": 7015926.69090909 00:26:44.058 } 00:26:44.058 ], 00:26:44.058 "core_count": 1 00:26:44.058 } 00:26:44.625 09:28:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:26:44.625 09:28:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:44.625 09:28:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:26:44.884 09:28:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:26:44.884 09:28:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # 
get_bdev 00:26:44.884 09:28:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:26:44.884 09:28:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:26:45.143 09:28:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:26:45.143 09:28:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 90114 00:26:45.143 09:28:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 90095 00:26:45.143 09:28:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 90095 ']' 00:26:45.143 09:28:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 90095 00:26:45.143 09:28:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:26:45.143 09:28:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:45.143 09:28:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90095 00:26:45.143 09:28:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:45.143 09:28:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:45.143 killing process with pid 90095 00:26:45.143 09:28:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90095' 00:26:45.143 09:28:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 90095 00:26:45.143 Received shutdown signal, test time was about 9.335801 seconds 00:26:45.143 00:26:45.143 Latency(us) 00:26:45.143 [2024-12-13T09:28:39.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:45.143 [2024-12-13T09:28:39.033Z] =================================================================================================================== 00:26:45.143 [2024-12-13T09:28:39.033Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:45.143 09:28:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 90095 00:26:46.081 09:28:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:46.081 [2024-12-13 09:28:39.914964] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:46.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:46.081 09:28:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=90244 00:26:46.081 09:28:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 90244 /var/tmp/bdevperf.sock 00:26:46.081 09:28:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 90244 ']' 00:26:46.081 09:28:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:46.081 09:28:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:46.081 09:28:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
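One note on the first run's numbers before the second bdevperf instance starts below: the JSON block printed after the latency table carries the same data as the table itself - runtime 8.173 s, 910.92 IOPS, 3.56 MiB/s, and 128 failed I/Os, which plausibly are the 128 requests the -q 128 queue depth allowed in flight when the controller was finally failed (an inference from the numbers, not something the log states). If such a result were saved to a file, a short jq sketch could pull the headline figures back out; result.json is a hypothetical filename, the harness itself does not write one:

    jq '.results[0] | {job, runtime, iops, io_failed, avg_latency_us}' result.json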
00:26:46.081 09:28:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:26:46.081 09:28:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:46.081 09:28:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:26:46.340 [2024-12-13 09:28:40.045587] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:26:46.340 [2024-12-13 09:28:40.045779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90244 ]
00:26:46.340 [2024-12-13 09:28:40.220159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:46.599 [2024-12-13 09:28:40.306614] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:26:46.599 [2024-12-13 09:28:40.466206] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:26:47.167 09:28:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:47.167 09:28:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
00:26:47.167 09:28:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:26:47.426 09:28:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:26:47.685 NVMe0n1
00:26:47.685 09:28:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:26:47.685 09:28:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=90262
00:26:47.944 09:28:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:26:47.944 Running I/O for 10 seconds...
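The second attach differs from the first in two knobs: the reconnect delay drops from 2 s to 1 s and --fast-io-fail-timeout-sec 2 is added. Read against the option names (a reading of the flags, since the log itself does not spell this out), the intent is that outstanding I/O stops waiting after roughly 2 s and is failed back to bdevperf while the controller keeps retrying once a second until the same 5 s controller-loss budget runs out. Side by side, exactly as passed in the log:

    # first run  (host/timeout.sh@46): --ctrlr-loss-timeout-sec 5                              --reconnect-delay-sec 2
    # second run (host/timeout.sh@79): --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1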
00:26:48.884 09:28:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:48.884 6550.00 IOPS, 25.59 MiB/s [2024-12-13T09:28:42.774Z] [2024-12-13 09:28:42.748436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.884 [2024-12-13 09:28:42.748519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.748555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.748574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.748592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.748607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.748622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.748640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.748654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.748669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.748683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.748700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.748715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.748730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.748744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.748775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.748790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.748820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.748835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61800 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.748851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.748866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.748881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.748897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.748914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.748929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.748945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.748960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.748976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.748991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.749007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.749022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.749037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.749052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.749068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.749083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.749099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.749116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.749132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.749147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:48.884 [2024-12-13 09:28:42.749165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.749181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.749196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.749211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.749226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.749241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.749256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.749271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.749286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.749302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.749332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.749350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.749366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.749381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.749396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.749410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.749427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.749442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.749457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.749473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.749488] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.749503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.749519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.749534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.749549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.749564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.749583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.749598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:61992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.749613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.749629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.749644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.749659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.749676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.884 [2024-12-13 09:28:42.749692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.884 [2024-12-13 09:28:42.749707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.749721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.749737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.749752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:62032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.749767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.749782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:62040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.749798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.749813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:62048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.749828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.749843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:62056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.749858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.749873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.749888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.749903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.749920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.749935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:62080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.749950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.749965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:62088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.749997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:62112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:62136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:62160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:62168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:62192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 
09:28:42.750465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:62224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:62240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:62248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:62256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:62264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:62272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:62280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:62288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:62296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:62304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:62320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:62328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.750976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.885 [2024-12-13 09:28:42.750992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.885 [2024-12-13 09:28:42.751008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:79 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:62408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62448 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 
09:28:42.751784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.751980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.751998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.752014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.752029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.752044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.752080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.752096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.886 [2024-12-13 09:28:42.752114] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.752129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.886 [2024-12-13 09:28:42.752145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.752160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.886 [2024-12-13 09:28:42.752176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.752191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:61632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.886 [2024-12-13 09:28:42.752207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.752222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:61640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.886 [2024-12-13 09:28:42.752237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.752252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.886 [2024-12-13 09:28:42.752269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.752297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:61656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.886 [2024-12-13 09:28:42.752330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.886 [2024-12-13 09:28:42.752346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.886 [2024-12-13 09:28:42.752362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.887 [2024-12-13 09:28:42.752378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:61672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.887 [2024-12-13 09:28:42.752393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.887 [2024-12-13 09:28:42.752425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.887 [2024-12-13 09:28:42.752442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.887 [2024-12-13 09:28:42.752458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:61688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.887 [2024-12-13 09:28:42.752474] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.887 [2024-12-13 09:28:42.752490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.887 [2024-12-13 09:28:42.752506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.887 [2024-12-13 09:28:42.752522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.887 [2024-12-13 09:28:42.752538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.887 [2024-12-13 09:28:42.752554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.887 [2024-12-13 09:28:42.752574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.887 [2024-12-13 09:28:42.752591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:61720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.887 [2024-12-13 09:28:42.752607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.887 [2024-12-13 09:28:42.752624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.887 [2024-12-13 09:28:42.752640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.887 [2024-12-13 09:28:42.752655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.887 [2024-12-13 09:28:42.752671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.887 [2024-12-13 09:28:42.752686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(6) to be set 00:26:48.887 [2024-12-13 09:28:42.752722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:48.887 [2024-12-13 09:28:42.752735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:48.887 [2024-12-13 09:28:42.752751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62624 len:8 PRP1 0x0 PRP2 0x0 00:26:48.887 [2024-12-13 09:28:42.752765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.887 [2024-12-13 09:28:42.753111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.887 [2024-12-13 09:28:42.753152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.887 [2024-12-13 09:28:42.753171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.887 [2024-12-13 09:28:42.753189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.887 [2024-12-13 09:28:42.753203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.887 [2024-12-13 09:28:42.753219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.887 [2024-12-13 09:28:42.753233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.887 [2024-12-13 09:28:42.753248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.887 [2024-12-13 09:28:42.753261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:48.887 [2024-12-13 09:28:42.753581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.887 [2024-12-13 09:28:42.753655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:48.887 [2024-12-13 09:28:42.753795] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.887 [2024-12-13 09:28:42.753833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:48.887 [2024-12-13 09:28:42.753850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:48.887 [2024-12-13 09:28:42.753883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:48.887 [2024-12-13 09:28:42.753909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.887 [2024-12-13 09:28:42.753927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.887 [2024-12-13 09:28:42.753942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.887 [2024-12-13 09:28:42.753962] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.887 [2024-12-13 09:28:42.753978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.146 09:28:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:26:49.973 3850.50 IOPS, 15.04 MiB/s [2024-12-13T09:28:43.863Z] [2024-12-13 09:28:43.754117] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.973 [2024-12-13 09:28:43.754209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:49.973 [2024-12-13 09:28:43.754232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:49.973 [2024-12-13 09:28:43.754268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:49.973 [2024-12-13 09:28:43.754323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.973 [2024-12-13 09:28:43.754344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.973 [2024-12-13 09:28:43.754360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.973 [2024-12-13 09:28:43.754379] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:49.973 [2024-12-13 09:28:43.754394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.973 09:28:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:50.232 [2024-12-13 09:28:44.017591] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:50.232 09:28:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 90262 00:26:51.058 2567.00 IOPS, 10.03 MiB/s [2024-12-13T09:28:44.948Z] [2024-12-13 09:28:44.774619] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
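The disruption above is injected from the target side simply by toggling the subsystem's listener, as the host/timeout.sh xtrace shows; in isolation the sequence is roughly:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  sleep 1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

Dropping the listener tears down the TCP qpair, so every outstanding command completes with ABORTED - SQ DELETION (the wall of completions above) and the initiator's reconnect attempts fail with connect() errno 111 until the listener is re-added, at which point the "Resetting controller successful" message confirms the path recovered within the configured ctrlr-loss timeout.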
00:26:52.931 1925.25 IOPS, 7.52 MiB/s [2024-12-13T09:28:47.758Z] 2986.00 IOPS, 11.66 MiB/s [2024-12-13T09:28:48.694Z] 3954.17 IOPS, 15.45 MiB/s [2024-12-13T09:28:49.631Z] 4648.43 IOPS, 18.16 MiB/s [2024-12-13T09:28:51.009Z] 5168.88 IOPS, 20.19 MiB/s [2024-12-13T09:28:51.946Z] 5570.89 IOPS, 21.76 MiB/s [2024-12-13T09:28:51.946Z] 5899.90 IOPS, 23.05 MiB/s 00:26:58.056 Latency(us) 00:26:58.056 [2024-12-13T09:28:51.946Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:58.056 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:58.056 Verification LBA range: start 0x0 length 0x4000 00:26:58.056 NVMe0n1 : 10.01 5907.73 23.08 0.00 0.00 21621.04 1474.56 3019898.88 00:26:58.056 [2024-12-13T09:28:51.946Z] =================================================================================================================== 00:26:58.056 [2024-12-13T09:28:51.946Z] Total : 5907.73 23.08 0.00 0.00 21621.04 1474.56 3019898.88 00:26:58.056 { 00:26:58.056 "results": [ 00:26:58.056 { 00:26:58.056 "job": "NVMe0n1", 00:26:58.056 "core_mask": "0x4", 00:26:58.056 "workload": "verify", 00:26:58.056 "status": "finished", 00:26:58.056 "verify_range": { 00:26:58.056 "start": 0, 00:26:58.056 "length": 16384 00:26:58.056 }, 00:26:58.056 "queue_depth": 128, 00:26:58.056 "io_size": 4096, 00:26:58.056 "runtime": 10.008406, 00:26:58.056 "iops": 5907.733958834204, 00:26:58.056 "mibps": 23.07708577669611, 00:26:58.056 "io_failed": 0, 00:26:58.056 "io_timeout": 0, 00:26:58.056 "avg_latency_us": 21621.042732238922, 00:26:58.056 "min_latency_us": 1474.56, 00:26:58.056 "max_latency_us": 3019898.88 00:26:58.056 } 00:26:58.056 ], 00:26:58.056 "core_count": 1 00:26:58.056 } 00:26:58.056 09:28:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=90367 00:26:58.056 09:28:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:58.056 09:28:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:26:58.056 Running I/O for 10 seconds... 
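A quick cross-check of the first run's JSON results above (not part of the bdevperf output): 5907.73 IOPS x 4096 B per I/O is about 24.2 MB/s, i.e. 23.08 MiB/s, matching the reported "mibps"; and with queue_depth 128, Little's law gives an expected average latency of 128 / 5907.73 IOPS, roughly 21.7 ms, consistent with the reported avg_latency_us of 21621 even though individual commands stalled for up to about 3 s (max_latency_us 3019898.88) while the listener was down.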
00:26:58.996 09:28:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:58.996 6549.00 IOPS, 25.58 MiB/s [2024-12-13T09:28:52.886Z] [2024-12-13 09:28:52.861085] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861198] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861214] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861224] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861236] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861246] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861268] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861280] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861331] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861342] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861354] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861364] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861378] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861423] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861472] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861620] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861646] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861719] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861731] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861741] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861808] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861820] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861831] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861902] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861915] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861925] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861960] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861970] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861983] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.861994] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.862006] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.862017] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.862031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.862041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.862053] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.862064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.996 [2024-12-13 09:28:52.862076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862087] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862099] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862109] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862155] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862192] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862202] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862227] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862262] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862272] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862355] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862367] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862391] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862403] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862476] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:58.997 [2024-12-13 09:28:52.862610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:58656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.997 [2024-12-13 09:28:52.862650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.997 [2024-12-13 09:28:52.862696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:58664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.997 [2024-12-13 09:28:52.862712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.997 [2024-12-13 09:28:52.862727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:58672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.997 [2024-12-13 09:28:52.862740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.997 [2024-12-13 09:28:52.862755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:58680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.997 [2024-12-13 09:28:52.862768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.997 [2024-12-13 09:28:52.862782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.997 [2024-12-13 09:28:52.862795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.997 [2024-12-13 09:28:52.862809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:58696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.997 [2024-12-13 09:28:52.862822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.997 [2024-12-13 09:28:52.862836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:58704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.997 [2024-12-13 09:28:52.862849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.997 [2024-12-13 09:28:52.862906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 
nsid:1 lba:58712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.997 [2024-12-13 09:28:52.862921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.997 [2024-12-13 09:28:52.862936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:58720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.997 [2024-12-13 09:28:52.862949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.997 [2024-12-13 09:28:52.862964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:58728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.997 [2024-12-13 09:28:52.862978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.997 [2024-12-13 09:28:52.862993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:58736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.997 [2024-12-13 09:28:52.863006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.997 [2024-12-13 09:28:52.863021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:58744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.997 [2024-12-13 09:28:52.863035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.997 [2024-12-13 09:28:52.863050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.997 [2024-12-13 09:28:52.863064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.997 [2024-12-13 09:28:52.863079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:58760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.997 [2024-12-13 09:28:52.863093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.997 [2024-12-13 09:28:52.863108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:58768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.997 [2024-12-13 09:28:52.863122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.997 [2024-12-13 09:28:52.863137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:58776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.997 [2024-12-13 09:28:52.863150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.997 [2024-12-13 09:28:52.863167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.997 [2024-12-13 09:28:52.863196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.997 [2024-12-13 09:28:52.863211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:58792 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:58.997 [2024-12-13 09:28:52.863224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.997 [2024-12-13 09:28:52.863253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:58800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.997 [2024-12-13 09:28:52.863265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.997 [2024-12-13 09:28:52.863279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:58808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.997 [2024-12-13 09:28:52.863292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.997 [2024-12-13 09:28:52.863306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.997 [2024-12-13 09:28:52.863334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.997 [2024-12-13 09:28:52.863349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:58824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.997 [2024-12-13 09:28:52.863377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.997 [2024-12-13 09:28:52.863393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:58832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.863407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.863422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:58840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.863435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.863450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:58848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.863462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.863477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:58856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.863490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.863529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:58864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.863543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.863558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:58872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 
[2024-12-13 09:28:52.863571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.863586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:58880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.863599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.863614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:58888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.863626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.863641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:58896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.863654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.863668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:58904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.863681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.863710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:58912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.863724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.863738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.863750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.863765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:58928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.863777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.863792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.863804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.863819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:58944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.863831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.863845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.863857] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.863871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:58960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.863884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.863898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:58968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.863911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.863925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:58976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.863937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.863952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:58984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.863964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.863978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:58992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.863991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.864005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:59000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.864018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.864032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:59008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.864044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.864058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.864071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.864085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.864097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.864111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.864123] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.864137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:59040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.864151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.864166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:59048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.864178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.864193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:59056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.864205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.864220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:59064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.864232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.864247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.864259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.864273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:59080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.864285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.864315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:59088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.864330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.864344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.864357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.864371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.864383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.864398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.864410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.864425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.864437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.864451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.864464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.864478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:59136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.864490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.864504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.998 [2024-12-13 09:28:52.864516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.998 [2024-12-13 09:28:52.864531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:59152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.864543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.864557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:59160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.864570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.864584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:59168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.864597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.864612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:59176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.864624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.864639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.864651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.864666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.864679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.864695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:59200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.864707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.864721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.864733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.864748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.864761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.864775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:59224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.864787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.864801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.864814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.864828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.864841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.864855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.864867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.864881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:59256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.864894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.864908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:59264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.864920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.864934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.864946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:58.999 [2024-12-13 09:28:52.864961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:59280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.864973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.864988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.865000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.865015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:59296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.865027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.865042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.865054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.865076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:59312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.865088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.865103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.865115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.865129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.865142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.865156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:59336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.865169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.865183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.865195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.865210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:59352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.865222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.865236] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.865248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.865262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.865275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.865316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:59376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.865330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.865344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.865357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.865371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:59392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.865383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.865398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:59400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.865410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.865425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:59408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.865438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.865452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.865464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.865479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.865491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.865505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.865517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.865534] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:59440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.865547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.865562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.865574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.865588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:59456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.865600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.865614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:59464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.865627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.999 [2024-12-13 09:28:52.865641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.999 [2024-12-13 09:28:52.865653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.000 [2024-12-13 09:28:52.865668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:59480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.000 [2024-12-13 09:28:52.865680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.000 [2024-12-13 09:28:52.865694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:59488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.000 [2024-12-13 09:28:52.865707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.000 [2024-12-13 09:28:52.865721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:59496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.000 [2024-12-13 09:28:52.865733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.000 [2024-12-13 09:28:52.865747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.000 [2024-12-13 09:28:52.865759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.000 [2024-12-13 09:28:52.865774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.000 [2024-12-13 09:28:52.865786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.000 [2024-12-13 09:28:52.865800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:59520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.000 [2024-12-13 09:28:52.865813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.000 [2024-12-13 09:28:52.865828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.000 [2024-12-13 09:28:52.865840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.000 [2024-12-13 09:28:52.865858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.000 [2024-12-13 09:28:52.865871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.000 [2024-12-13 09:28:52.865886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.000 [2024-12-13 09:28:52.865898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.000 [2024-12-13 09:28:52.865913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:59568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.000 [2024-12-13 09:28:52.865925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.000 [2024-12-13 09:28:52.865939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:59576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.000 [2024-12-13 09:28:52.865951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.000 [2024-12-13 09:28:52.865968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.000 [2024-12-13 09:28:52.865981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.000 [2024-12-13 09:28:52.865995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.000 [2024-12-13 09:28:52.866007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.000 [2024-12-13 09:28:52.866021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.000 [2024-12-13 09:28:52.866033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.000 [2024-12-13 09:28:52.866047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.000 [2024-12-13 09:28:52.866060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.000 [2024-12-13 09:28:52.866074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:59.000 [2024-12-13 09:28:52.866086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.000 [2024-12-13 09:28:52.866100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.000 [2024-12-13 09:28:52.866112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.000 [2024-12-13 09:28:52.866126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:59632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.000 [2024-12-13 09:28:52.866139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.000 [2024-12-13 09:28:52.866153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.000 [2024-12-13 09:28:52.866165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.000 [2024-12-13 09:28:52.866179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.000 [2024-12-13 09:28:52.866191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.000 [2024-12-13 09:28:52.866206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.000 [2024-12-13 09:28:52.866218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.000 [2024-12-13 09:28:52.866232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.000 [2024-12-13 09:28:52.866244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.000 [2024-12-13 09:28:52.866258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:59.000 [2024-12-13 09:28:52.866270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.000 [2024-12-13 09:28:52.866300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:59544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.000 [2024-12-13 09:28:52.866314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.000 [2024-12-13 09:28:52.866327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002bc80 is same with the state(6) to be set 00:26:59.000 [2024-12-13 09:28:52.866343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:59.000 [2024-12-13 09:28:52.866355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:59.000 [2024-12-13 09:28:52.866367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59552 len:8 
PRP1 0x0 PRP2 0x0 00:26:59.000 [2024-12-13 09:28:52.866380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.000 [2024-12-13 09:28:52.866926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:59.000 [2024-12-13 09:28:52.867044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:59.000 [2024-12-13 09:28:52.867206] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.000 [2024-12-13 09:28:52.867251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:59.000 [2024-12-13 09:28:52.867266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:59.000 [2024-12-13 09:28:52.867304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:59.000 [2024-12-13 09:28:52.867342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:26:59.000 [2024-12-13 09:28:52.867356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:26:59.000 [2024-12-13 09:28:52.867388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:59.000 [2024-12-13 09:28:52.867404] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:26:59.000 [2024-12-13 09:28:52.867419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:59.259 09:28:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:27:00.196 3666.00 IOPS, 14.32 MiB/s [2024-12-13T09:28:54.086Z] [2024-12-13 09:28:53.867574] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.196 [2024-12-13 09:28:53.867651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:27:00.196 [2024-12-13 09:28:53.867671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:27:00.196 [2024-12-13 09:28:53.867703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:27:00.196 [2024-12-13 09:28:53.867730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:27:00.196 [2024-12-13 09:28:53.867743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:27:00.196 [2024-12-13 09:28:53.867756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:27:00.196 [2024-12-13 09:28:53.867771] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:27:00.196 [2024-12-13 09:28:53.867785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:27:01.130 2444.00 IOPS, 9.55 MiB/s [2024-12-13T09:28:55.020Z] [2024-12-13 09:28:54.867924] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.130 [2024-12-13 09:28:54.867999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:27:01.130 [2024-12-13 09:28:54.868020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:27:01.130 [2024-12-13 09:28:54.868050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:27:01.130 [2024-12-13 09:28:54.868076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:27:01.130 [2024-12-13 09:28:54.868090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:27:01.130 [2024-12-13 09:28:54.868104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:27:01.130 [2024-12-13 09:28:54.868118] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:27:01.130 [2024-12-13 09:28:54.868132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:27:02.064 1833.00 IOPS, 7.16 MiB/s [2024-12-13T09:28:55.954Z] [2024-12-13 09:28:55.871225] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.064 [2024-12-13 09:28:55.871339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:27:02.064 [2024-12-13 09:28:55.871361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:27:02.064 [2024-12-13 09:28:55.871600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:27:02.064 [2024-12-13 09:28:55.871874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:27:02.064 [2024-12-13 09:28:55.871900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:27:02.064 [2024-12-13 09:28:55.871916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:27:02.064 [2024-12-13 09:28:55.871931] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
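The repeated failures above are the host side of the outage window: errno 111 is ECONNREFUSED, meaning nothing is listening on 10.0.0.3:4420 at this point, so every reconnect attempt fails and bdev_nvme marks the reset as failed before scheduling the next one. A minimal sketch for pulling that retry cadence out of a saved copy of this console output; the file name console.log is hypothetical:

  # hypothetical post-processing of a saved copy of this console log
  grep -n 'connect() failed, errno = 111' console.log    # one line per failed reconnect attempt
  grep -c 'connect() failed, errno = 111' console.log    # total number of attempts in the outage window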
00:27:02.064 [2024-12-13 09:28:55.871946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:27:02.064 09:28:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:02.323 [2024-12-13 09:28:56.141647] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:02.323 09:28:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 90367 00:27:03.150 1466.40 IOPS, 5.73 MiB/s [2024-12-13T09:28:57.040Z] [2024-12-13 09:28:56.901790] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:27:05.053 2439.17 IOPS, 9.53 MiB/s [2024-12-13T09:28:59.882Z] 3352.43 IOPS, 13.10 MiB/s [2024-12-13T09:29:00.819Z] 4033.38 IOPS, 15.76 MiB/s [2024-12-13T09:29:01.753Z] 4558.56 IOPS, 17.81 MiB/s [2024-12-13T09:29:02.012Z] 4992.30 IOPS, 19.50 MiB/s 00:27:08.122 Latency(us) 00:27:08.122 [2024-12-13T09:29:02.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:08.122 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:08.122 Verification LBA range: start 0x0 length 0x4000 00:27:08.122 NVMe0n1 : 10.01 4999.86 19.53 4039.56 0.00 14123.40 722.39 3019898.88 00:27:08.122 [2024-12-13T09:29:02.012Z] =================================================================================================================== 00:27:08.122 [2024-12-13T09:29:02.012Z] Total : 4999.86 19.53 4039.56 0.00 14123.40 0.00 3019898.88 00:27:08.122 { 00:27:08.122 "results": [ 00:27:08.122 { 00:27:08.122 "job": "NVMe0n1", 00:27:08.122 "core_mask": "0x4", 00:27:08.122 "workload": "verify", 00:27:08.122 "status": "finished", 00:27:08.122 "verify_range": { 00:27:08.122 "start": 0, 00:27:08.122 "length": 16384 00:27:08.122 }, 00:27:08.122 "queue_depth": 128, 00:27:08.122 "io_size": 4096, 00:27:08.122 "runtime": 10.01049, 00:27:08.122 "iops": 4999.855151945609, 00:27:08.122 "mibps": 19.530684187287534, 00:27:08.122 "io_failed": 40438, 00:27:08.122 "io_timeout": 0, 00:27:08.122 "avg_latency_us": 14123.395276452487, 00:27:08.122 "min_latency_us": 722.3854545454545, 00:27:08.122 "max_latency_us": 3019898.88 00:27:08.122 } 00:27:08.122 ], 00:27:08.122 "core_count": 1 00:27:08.122 } 00:27:08.122 09:29:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 90244 00:27:08.122 09:29:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 90244 ']' 00:27:08.122 09:29:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 90244 00:27:08.122 09:29:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:27:08.122 09:29:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:08.122 09:29:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90244 00:27:08.122 killing process with pid 90244 00:27:08.123 Received shutdown signal, test time was about 10.000000 seconds 00:27:08.123 00:27:08.123 Latency(us) 00:27:08.123 [2024-12-13T09:29:02.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:08.123 [2024-12-13T09:29:02.013Z] =================================================================================================================== 00:27:08.123 [2024-12-13T09:29:02.013Z] Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:27:08.123 09:29:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:27:08.123 09:29:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:27:08.123 09:29:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90244' 00:27:08.123 09:29:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 90244 00:27:08.123 09:29:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 90244 00:27:09.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:09.060 09:29:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=90492 00:27:09.060 09:29:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:27:09.060 09:29:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 90492 /var/tmp/bdevperf.sock 00:27:09.060 09:29:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 90492 ']' 00:27:09.060 09:29:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:09.060 09:29:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:09.060 09:29:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:09.060 09:29:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:09.060 09:29:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:09.060 [2024-12-13 09:29:02.781114] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:27:09.060 [2024-12-13 09:29:02.781580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90492 ] 00:27:09.318 [2024-12-13 09:29:02.962060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.318 [2024-12-13 09:29:03.047875] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:09.318 [2024-12-13 09:29:03.205110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:09.885 09:29:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:09.885 09:29:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:27:09.885 09:29:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=90508 00:27:09.885 09:29:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 90492 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:27:09.885 09:29:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:27:10.144 09:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:27:10.712 NVMe0n1 00:27:10.712 09:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=90544 00:27:10.712 09:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:10.712 09:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:27:10.712 Running I/O for 10 seconds... 
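Taken together, the trace above brings up a fresh bdevperf instance and wires it up entirely over its RPC socket. A condensed sketch of that flow, shown only as an illustration and built from the paths, flags and values visible in this run, not a verbatim replay of host/timeout.sh:

  # start bdevperf idle (-z) on its own RPC socket and remember its pid
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w randread -t 10 -f &
  bdevperf_pid=$!
  # (the real script waits for /var/tmp/bdevperf.sock to come up before issuing any RPCs)
  # attach the BPF timeout probes to the bdevperf process (pid 90492 in this run)
  /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh "$bdevperf_pid" \
      /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt &
  # apply the nvme bdev options used by the test (flags exactly as traced above)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
  # attach the target; keep retrying for up to 5 s after a connection loss, one attempt every 2 s
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # kick off the 10-second randread run against the attached NVMe0n1 bdev
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests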
00:27:11.647 09:29:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:11.909 13462.00 IOPS, 52.59 MiB/s [2024-12-13T09:29:05.799Z] [2024-12-13 09:29:05.579230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579350] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579538] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579642] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579690] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579714] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579725] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.909 [2024-12-13 09:29:05.579764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.579776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.579788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.579800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.579814] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.579826] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.579840] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.579862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.579875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.579887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.579901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.579913] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.579926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.579937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.579950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.579961] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.579976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.579987] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580000] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580011] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580035] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580059] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580072] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580097] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580108] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580186] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580205] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580228] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580255] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580268] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580314] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580354] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580391] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580472] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580547] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580558] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580591] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580613] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580638] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580651] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580661] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580717] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580726] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580738] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580760] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580769] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580781] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580814] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580851] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.910 [2024-12-13 09:29:05.580863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.911 [2024-12-13 09:29:05.580874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.911 [2024-12-13 09:29:05.580886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.911 [2024-12-13 09:29:05.580896] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.911 [2024-12-13 09:29:05.580907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.911 [2024-12-13 09:29:05.580917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.911 [2024-12-13 09:29:05.580930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.911 [2024-12-13 09:29:05.580940] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.911 [2024-12-13 09:29:05.580952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.911 [2024-12-13 09:29:05.580962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.911 [2024-12-13 09:29:05.580974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:11.911 [2024-12-13 09:29:05.581049] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:70232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:51176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:67592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581419] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:73712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:90328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:47296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:76992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:67232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581720] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:104 nsid:1 lba:53248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:30776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:93024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.581984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.581997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.582013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 
lba:19208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.582026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.582043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.582056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.582072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:32328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.582088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.582108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.582121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.582138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.911 [2024-12-13 09:29:05.582151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.911 [2024-12-13 09:29:05.582167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:56400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.582180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.582197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:105336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.582209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.582225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:51624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.582238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.582255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:31936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.582267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.582294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:60160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.582310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.582327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44048 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.582339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.582357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.582371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.582387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.582400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.582417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.582430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.582446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.582459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.582476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:108936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.582502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.582522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.582536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.582552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.582565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.582582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.582597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.582616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.582629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.582646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 
09:29:05.582659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.582675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:85784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.582688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.582704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.582717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.582733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:103128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.582746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.582762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.582775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.582791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.582804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.582821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:124464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.582833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.582851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.582894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.582914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:36000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.582930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.582951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.582966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.582985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:60992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.583000] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.583019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.583034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.583052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.583067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.583086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.583101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.583120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.583137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.583158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:76632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.583174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.583208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:54448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.583236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.583268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:43008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.583280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.583312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.583325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.583342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.583370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.583390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:102272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.583403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.583420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.583434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.583451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.583464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.583485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.583530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.583548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:127256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.583561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.912 [2024-12-13 09:29:05.583595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.912 [2024-12-13 09:29:05.583624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.583642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:114336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.583655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.583673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.583687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.583705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:114144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.583718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.583736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.583750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.583768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.583783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.583804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.584219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.584481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.584684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.584834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:45032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.584981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.585076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:130136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.585262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.585391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.585564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.585730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:26288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.585903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.586063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.586210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.586313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.586500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.586594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:34992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.586726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.586800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:104264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.586976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:11.913 [2024-12-13 09:29:05.587066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.587153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.587336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.587522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.587683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:89128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.587828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.587978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:113704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.588156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.588335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:26944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.588400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.588427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:90104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.588444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.588465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:61496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.588480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.588498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.588513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.588534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:77816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.588548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.588566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:49000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.588580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.588599] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.588613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.588631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:43104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.588645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.588664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:33504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.588678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.588696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.588710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.588730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.588744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.588777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:118856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.588791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.588809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.588824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.588847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.588861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.588878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.588903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.588923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.588948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.588965] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.588979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.913 [2024-12-13 09:29:05.588998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:61504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.913 [2024-12-13 09:29:05.589011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.914 [2024-12-13 09:29:05.589031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.914 [2024-12-13 09:29:05.589045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.914 [2024-12-13 09:29:05.589062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.914 [2024-12-13 09:29:05.589075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.914 [2024-12-13 09:29:05.589093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.914 [2024-12-13 09:29:05.589106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.914 [2024-12-13 09:29:05.589123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:35552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.914 [2024-12-13 09:29:05.589136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.914 [2024-12-13 09:29:05.589154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.914 [2024-12-13 09:29:05.589167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.914 [2024-12-13 09:29:05.589200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.914 [2024-12-13 09:29:05.589214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.914 [2024-12-13 09:29:05.589232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:106664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.914 [2024-12-13 09:29:05.589245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.914 [2024-12-13 09:29:05.589263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:26944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.914 [2024-12-13 09:29:05.589276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.914 [2024-12-13 09:29:05.589296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:5048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.914 [2024-12-13 09:29:05.589310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.914 [2024-12-13 09:29:05.589347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:36104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.914 [2024-12-13 09:29:05.589362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.914 [2024-12-13 09:29:05.589380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.914 [2024-12-13 09:29:05.589394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.914 [2024-12-13 09:29:05.589412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.914 [2024-12-13 09:29:05.589426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.914 [2024-12-13 09:29:05.589446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:76400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.914 [2024-12-13 09:29:05.589460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.914 [2024-12-13 09:29:05.589478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.914 [2024-12-13 09:29:05.589491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.914 [2024-12-13 09:29:05.589508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(6) to be set 00:27:11.914 [2024-12-13 09:29:05.589528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:11.914 [2024-12-13 09:29:05.589544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:11.914 [2024-12-13 09:29:05.589558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126752 len:8 PRP1 0x0 PRP2 0x0 00:27:11.914 [2024-12-13 09:29:05.589581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.914 [2024-12-13 09:29:05.589945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.914 [2024-12-13 09:29:05.589969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.914 [2024-12-13 09:29:05.589989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.914 [2024-12-13 09:29:05.590003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.914 [2024-12-13 09:29:05.590019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.914 [2024-12-13 09:29:05.590032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.914 [2024-12-13 09:29:05.590063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.914 [2024-12-13 09:29:05.590076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.914 [2024-12-13 09:29:05.590090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:27:11.914 [2024-12-13 09:29:05.590594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:27:11.914 [2024-12-13 09:29:05.590824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:27:11.914 [2024-12-13 09:29:05.591213] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.914 [2024-12-13 09:29:05.591371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:27:11.914 [2024-12-13 09:29:05.591612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:27:11.914 [2024-12-13 09:29:05.591826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:27:11.914 [2024-12-13 09:29:05.592001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:27:11.914 [2024-12-13 09:29:05.592196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:27:11.914 [2024-12-13 09:29:05.592351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:27:11.914 [2024-12-13 09:29:05.592416] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
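A note on the completion dump above: in spdk_nvme_print_completion output the "(00/08)" pair is the status code type and status code in hex, i.e. SCT 0x0 (generic command status) with SC 0x08, "Command Aborted due to SQ Deletion" — every command still queued on the qpair is flushed with that status once its submission queue is deleted for the controller reset. A minimal post-processing sketch, not part of the SPDK test scripts, for tallying those flushed commands from a saved console log (the file name is an assumption, and it presumes the raw log keeps one nvme_qpair print per line):

    # Hypothetical helper: count "ABORTED - SQ DELETION" completions per queue ID.
    # console.log is a placeholder for wherever the raw autotest console output was saved.
    grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' console.log | sort | uniq -c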
00:27:11.914 [2024-12-13 09:29:05.592666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:27:11.914 09:29:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 90544 00:27:13.787 7557.50 IOPS, 29.52 MiB/s [2024-12-13T09:29:07.677Z] 5038.33 IOPS, 19.68 MiB/s [2024-12-13T09:29:07.677Z] [2024-12-13 09:29:07.593018] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.787 [2024-12-13 09:29:07.593095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:27:13.787 [2024-12-13 09:29:07.593122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:27:13.787 [2024-12-13 09:29:07.593159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:27:13.787 [2024-12-13 09:29:07.593190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:27:13.787 [2024-12-13 09:29:07.593205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:27:13.787 [2024-12-13 09:29:07.593221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:27:13.787 [2024-12-13 09:29:07.593253] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:27:13.787 [2024-12-13 09:29:07.593273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:27:15.660 3778.75 IOPS, 14.76 MiB/s [2024-12-13T09:29:09.809Z] 3023.00 IOPS, 11.81 MiB/s [2024-12-13T09:29:09.809Z] [2024-12-13 09:29:09.593505] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.919 [2024-12-13 09:29:09.593569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:27:15.919 [2024-12-13 09:29:09.593597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:27:15.919 [2024-12-13 09:29:09.593634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:27:15.919 [2024-12-13 09:29:09.593666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:27:15.919 [2024-12-13 09:29:09.593695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:27:15.919 [2024-12-13 09:29:09.593715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:27:15.919 [2024-12-13 09:29:09.593733] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:27:15.919 [2024-12-13 09:29:09.593751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:27:17.791 2519.17 IOPS, 9.84 MiB/s [2024-12-13T09:29:11.681Z] 2159.29 IOPS, 8.43 MiB/s [2024-12-13T09:29:11.681Z] [2024-12-13 09:29:11.593843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
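Worth decoding here: errno 111 from uring_sock_create is ECONNREFUSED on Linux, so each reconnect attempt fails as soon as connect() is tried, and the entries above show bdev_nvme scheduling the next attempt roughly two seconds apart (09:29:05, 09:29:07, 09:29:09, ...), while the bdevperf IOPS column keeps falling because no I/O completes in the meantime. A minimal sketch, not taken from the test suite, of how that ~2 s cadence could be confirmed from a saved console log (the file name is an assumption and the raw log is assumed to keep one entry per line):

    # Hypothetical helper: print the gap between consecutive "resetting controller"
    # notices; each gap should come out close to the ~2 s reconnect delay seen above.
    grep 'nvme_ctrlr_disconnect: \*NOTICE\*' console.log |
      sed 's/.*\[2024-12-13 \([0-9:.]*\)\].*/\1/' |
      awk -F: '{ t = $1 * 3600 + $2 * 60 + $3; if (NR > 1) printf "gap: %.3f s\n", t - prev; prev = t }'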
00:27:17.791 [2024-12-13 09:29:11.593914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:27:17.791 [2024-12-13 09:29:11.593930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:27:17.791 [2024-12-13 09:29:11.593946] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:27:17.791 [2024-12-13 09:29:11.593963] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:27:18.727 1889.38 IOPS, 7.38 MiB/s 00:27:18.727 Latency(us) 00:27:18.727 [2024-12-13T09:29:12.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.728 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:27:18.728 NVMe0n1 : 8.15 1855.45 7.25 15.71 0.00 68471.10 8877.15 7046430.72 00:27:18.728 [2024-12-13T09:29:12.618Z] =================================================================================================================== 00:27:18.728 [2024-12-13T09:29:12.618Z] Total : 1855.45 7.25 15.71 0.00 68471.10 8877.15 7046430.72 00:27:18.728 { 00:27:18.728 "results": [ 00:27:18.728 { 00:27:18.728 "job": "NVMe0n1", 00:27:18.728 "core_mask": "0x4", 00:27:18.728 "workload": "randread", 00:27:18.728 "status": "finished", 00:27:18.728 "queue_depth": 128, 00:27:18.728 "io_size": 4096, 00:27:18.728 "runtime": 8.146267, 00:27:18.728 "iops": 1855.4510918927651, 00:27:18.728 "mibps": 7.247855827706114, 00:27:18.728 "io_failed": 128, 00:27:18.728 "io_timeout": 0, 00:27:18.728 "avg_latency_us": 68471.10390343108, 00:27:18.728 "min_latency_us": 8877.149090909092, 00:27:18.728 "max_latency_us": 7046430.72 00:27:18.728 } 00:27:18.728 ], 00:27:18.728 "core_count": 1 00:27:18.728 } 00:27:18.728 09:29:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:18.987 Attaching 5 probes... 
00:27:18.987 1411.675397: reset bdev controller NVMe0 00:27:18.987 1412.190066: reconnect bdev controller NVMe0 00:27:18.987 3413.982143: reconnect delay bdev controller NVMe0 00:27:18.987 3414.018020: reconnect bdev controller NVMe0 00:27:18.987 5414.462562: reconnect delay bdev controller NVMe0 00:27:18.987 5414.499273: reconnect bdev controller NVMe0 00:27:18.987 7414.905594: reconnect delay bdev controller NVMe0 00:27:18.987 7414.941537: reconnect bdev controller NVMe0 00:27:18.987 09:29:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:27:18.987 09:29:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:27:18.987 09:29:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 90508 00:27:18.987 09:29:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:18.987 09:29:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 90492 00:27:18.987 09:29:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 90492 ']' 00:27:18.987 09:29:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 90492 00:27:18.987 09:29:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:27:18.987 09:29:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:18.987 09:29:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90492 00:27:18.987 killing process with pid 90492 00:27:18.987 Received shutdown signal, test time was about 8.221813 seconds 00:27:18.987 00:27:18.987 Latency(us) 00:27:18.987 [2024-12-13T09:29:12.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.987 [2024-12-13T09:29:12.877Z] =================================================================================================================== 00:27:18.987 [2024-12-13T09:29:12.877Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:18.987 09:29:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:27:18.987 09:29:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:27:18.987 09:29:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90492' 00:27:18.987 09:29:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 90492 00:27:18.987 09:29:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 90492 00:27:19.925 09:29:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:19.925 09:29:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:27:19.925 09:29:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:27:19.925 09:29:13 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:19.925 09:29:13 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:27:19.925 09:29:13 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:19.925 09:29:13 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:27:19.925 09:29:13 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:19.925 09:29:13 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:19.925 rmmod nvme_tcp 00:27:20.184 rmmod nvme_fabrics 00:27:20.184 rmmod nvme_keyring 00:27:20.184 09:29:13 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:20.184 09:29:13 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:27:20.184 09:29:13 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:27:20.184 09:29:13 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 90041 ']' 00:27:20.184 09:29:13 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 90041 00:27:20.184 09:29:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 90041 ']' 00:27:20.184 09:29:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 90041 00:27:20.184 09:29:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:27:20.184 09:29:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:20.184 09:29:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90041 00:27:20.184 killing process with pid 90041 00:27:20.184 09:29:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:20.184 09:29:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:20.184 09:29:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90041' 00:27:20.184 09:29:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 90041 00:27:20.184 09:29:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 90041 00:27:21.121 09:29:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:21.121 09:29:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:21.121 09:29:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:21.121 09:29:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:27:21.121 09:29:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:21.121 09:29:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:21.121 09:29:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:27:21.121 09:29:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:21.121 09:29:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:21.121 09:29:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:21.121 09:29:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:21.121 09:29:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:21.121 09:29:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:21.121 09:29:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:21.121 09:29:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:21.121 09:29:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:21.121 09:29:14 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:21.121 09:29:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:21.121 09:29:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:21.121 09:29:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:21.121 09:29:14 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:21.383 09:29:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:21.383 09:29:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:21.383 09:29:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.383 09:29:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:21.383 09:29:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.383 09:29:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:27:21.383 00:27:21.383 real 0m50.249s 00:27:21.383 user 2m26.456s 00:27:21.383 sys 0m5.485s 00:27:21.383 09:29:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:21.383 09:29:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:21.383 ************************************ 00:27:21.383 END TEST nvmf_timeout 00:27:21.383 ************************************ 00:27:21.383 09:29:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:27:21.383 09:29:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:21.383 00:27:21.383 real 6m23.351s 00:27:21.383 user 17m45.043s 00:27:21.383 sys 1m16.246s 00:27:21.383 09:29:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:21.383 09:29:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.383 ************************************ 00:27:21.383 END TEST nvmf_host 00:27:21.383 ************************************ 00:27:21.383 09:29:15 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:21.383 09:29:15 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:27:21.383 ************************************ 00:27:21.383 END TEST nvmf_tcp 00:27:21.383 ************************************ 00:27:21.383 00:27:21.383 real 17m2.605s 00:27:21.383 user 44m20.097s 00:27:21.383 sys 4m5.867s 00:27:21.383 09:29:15 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:21.383 09:29:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:21.383 09:29:15 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:27:21.383 09:29:15 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:21.383 09:29:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:21.383 09:29:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:21.383 09:29:15 -- common/autotest_common.sh@10 -- # set +x 00:27:21.383 ************************************ 00:27:21.383 START TEST nvmf_dif 00:27:21.383 ************************************ 00:27:21.383 09:29:15 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:21.642 * Looking for test storage... 
00:27:21.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:21.642 09:29:15 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:21.642 09:29:15 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:27:21.642 09:29:15 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:21.642 09:29:15 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:27:21.642 09:29:15 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:21.642 09:29:15 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:21.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.642 --rc genhtml_branch_coverage=1 00:27:21.642 --rc genhtml_function_coverage=1 00:27:21.642 --rc genhtml_legend=1 00:27:21.642 --rc geninfo_all_blocks=1 00:27:21.642 --rc geninfo_unexecuted_blocks=1 00:27:21.642 00:27:21.642 ' 00:27:21.642 09:29:15 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:21.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.642 --rc genhtml_branch_coverage=1 00:27:21.642 --rc genhtml_function_coverage=1 00:27:21.642 --rc genhtml_legend=1 00:27:21.642 --rc geninfo_all_blocks=1 00:27:21.642 --rc geninfo_unexecuted_blocks=1 00:27:21.642 00:27:21.642 ' 00:27:21.642 09:29:15 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:27:21.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.642 --rc genhtml_branch_coverage=1 00:27:21.642 --rc genhtml_function_coverage=1 00:27:21.642 --rc genhtml_legend=1 00:27:21.642 --rc geninfo_all_blocks=1 00:27:21.642 --rc geninfo_unexecuted_blocks=1 00:27:21.642 00:27:21.642 ' 00:27:21.642 09:29:15 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:21.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.642 --rc genhtml_branch_coverage=1 00:27:21.642 --rc genhtml_function_coverage=1 00:27:21.642 --rc genhtml_legend=1 00:27:21.642 --rc geninfo_all_blocks=1 00:27:21.642 --rc geninfo_unexecuted_blocks=1 00:27:21.642 00:27:21.642 ' 00:27:21.642 09:29:15 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:21.642 09:29:15 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:21.642 09:29:15 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.642 09:29:15 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.642 09:29:15 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.642 09:29:15 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:27:21.642 09:29:15 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:21.642 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:21.642 09:29:15 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:21.642 09:29:15 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:21.642 09:29:15 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:21.642 09:29:15 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:21.642 09:29:15 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.642 09:29:15 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:21.642 09:29:15 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:21.642 09:29:15 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:21.642 09:29:15 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:21.643 Cannot find device "nvmf_init_br" 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@162 -- # true 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:21.643 Cannot find device "nvmf_init_br2" 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@163 -- # true 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:21.643 Cannot find device "nvmf_tgt_br" 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@164 -- # true 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:21.643 Cannot find device "nvmf_tgt_br2" 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@165 -- # true 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:21.643 Cannot find device "nvmf_init_br" 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@166 -- # true 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:21.643 Cannot find device "nvmf_init_br2" 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@167 -- # true 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:21.643 Cannot find device "nvmf_tgt_br" 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@168 -- # true 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:21.643 Cannot find device "nvmf_tgt_br2" 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@169 -- # true 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:21.643 Cannot find device "nvmf_br" 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@170 -- # true 00:27:21.643 09:29:15 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:27:21.901 Cannot find device "nvmf_init_if" 00:27:21.901 09:29:15 nvmf_dif -- nvmf/common.sh@171 -- # true 00:27:21.901 09:29:15 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:21.901 Cannot find device "nvmf_init_if2" 00:27:21.901 09:29:15 nvmf_dif -- nvmf/common.sh@172 -- # true 00:27:21.901 09:29:15 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:21.901 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:21.901 09:29:15 nvmf_dif -- nvmf/common.sh@173 -- # true 00:27:21.901 09:29:15 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:21.901 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:21.901 09:29:15 nvmf_dif -- nvmf/common.sh@174 -- # true 00:27:21.901 09:29:15 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:21.901 09:29:15 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:21.901 09:29:15 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:21.901 09:29:15 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:21.901 09:29:15 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:21.901 09:29:15 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:21.901 09:29:15 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:21.901 09:29:15 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:21.901 09:29:15 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:21.901 09:29:15 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:21.901 09:29:15 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:21.901 09:29:15 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:21.901 09:29:15 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:21.901 09:29:15 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:21.901 09:29:15 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:21.901 09:29:15 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:21.901 09:29:15 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:21.902 09:29:15 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:21.902 09:29:15 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:21.902 09:29:15 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:21.902 09:29:15 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:21.902 09:29:15 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:21.902 09:29:15 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:21.902 09:29:15 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:21.902 09:29:15 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:22.160 09:29:15 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:22.160 09:29:15 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:22.160 09:29:15 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:22.160 09:29:15 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:22.160 09:29:15 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:22.160 09:29:15 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:22.160 09:29:15 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:22.160 09:29:15 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:22.160 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:22.160 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:27:22.160 00:27:22.160 --- 10.0.0.3 ping statistics --- 00:27:22.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.160 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:27:22.160 09:29:15 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:22.160 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:22.160 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:27:22.160 00:27:22.160 --- 10.0.0.4 ping statistics --- 00:27:22.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.160 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:27:22.161 09:29:15 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:22.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:22.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:27:22.161 00:27:22.161 --- 10.0.0.1 ping statistics --- 00:27:22.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.161 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:27:22.161 09:29:15 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:22.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:22.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:27:22.161 00:27:22.161 --- 10.0.0.2 ping statistics --- 00:27:22.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.161 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:27:22.161 09:29:15 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:22.161 09:29:15 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:27:22.161 09:29:15 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:27:22.161 09:29:15 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:22.420 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:22.420 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:22.420 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:22.420 09:29:16 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:22.420 09:29:16 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:22.420 09:29:16 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:22.420 09:29:16 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:22.420 09:29:16 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:22.420 09:29:16 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:22.420 09:29:16 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:22.420 09:29:16 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:27:22.420 09:29:16 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:22.420 09:29:16 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:22.420 09:29:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:22.420 09:29:16 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=91055 00:27:22.420 09:29:16 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:22.420 09:29:16 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 91055 00:27:22.420 09:29:16 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 91055 ']' 00:27:22.420 09:29:16 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.420 09:29:16 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:22.420 09:29:16 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.420 09:29:16 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:22.420 09:29:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:22.679 [2024-12-13 09:29:16.380349] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:27:22.679 [2024-12-13 09:29:16.380535] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:22.938 [2024-12-13 09:29:16.571093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.938 [2024-12-13 09:29:16.693985] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:22.938 [2024-12-13 09:29:16.694064] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:22.938 [2024-12-13 09:29:16.694089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:22.938 [2024-12-13 09:29:16.694120] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:22.938 [2024-12-13 09:29:16.694138] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:22.938 [2024-12-13 09:29:16.695625] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.197 [2024-12-13 09:29:16.902540] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:23.776 09:29:17 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:23.776 09:29:17 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:27:23.776 09:29:17 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:23.776 09:29:17 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:23.776 09:29:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:23.776 09:29:17 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:23.776 09:29:17 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:23.776 09:29:17 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:23.776 09:29:17 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.776 09:29:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:23.776 [2024-12-13 09:29:17.400384] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:23.776 09:29:17 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.776 09:29:17 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:23.776 09:29:17 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:23.776 09:29:17 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:23.776 09:29:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:23.776 ************************************ 00:27:23.776 START TEST fio_dif_1_default 00:27:23.776 ************************************ 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:23.776 bdev_null0 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:23.776 
09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:23.776 [2024-12-13 09:29:17.444577] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:23.776 { 00:27:23.776 "params": { 00:27:23.776 "name": "Nvme$subsystem", 00:27:23.776 "trtype": "$TEST_TRANSPORT", 00:27:23.776 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:23.776 "adrfam": "ipv4", 00:27:23.776 "trsvcid": "$NVMF_PORT", 00:27:23.776 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:23.776 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:23.776 "hdgst": ${hdgst:-false}, 00:27:23.776 "ddgst": ${ddgst:-false} 00:27:23.776 }, 00:27:23.776 "method": "bdev_nvme_attach_controller" 00:27:23.776 } 00:27:23.776 EOF 00:27:23.776 )") 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:27:23.776 09:29:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:23.776 "params": { 00:27:23.776 "name": "Nvme0", 00:27:23.776 "trtype": "tcp", 00:27:23.776 "traddr": "10.0.0.3", 00:27:23.776 "adrfam": "ipv4", 00:27:23.776 "trsvcid": "4420", 00:27:23.776 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:23.776 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:23.777 "hdgst": false, 00:27:23.777 "ddgst": false 00:27:23.777 }, 00:27:23.777 "method": "bdev_nvme_attach_controller" 00:27:23.777 }' 00:27:23.777 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:23.777 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:23.777 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # break 00:27:23.777 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:23.777 09:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:24.087 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:24.087 fio-3.35 00:27:24.087 Starting 1 thread 00:27:36.300 00:27:36.300 filename0: (groupid=0, jobs=1): err= 0: pid=91114: Fri Dec 13 09:29:28 2024 00:27:36.300 read: IOPS=7873, BW=30.8MiB/s (32.2MB/s)(308MiB/10001msec) 00:27:36.300 slat (usec): min=7, max=129, avg= 9.89, stdev= 4.43 00:27:36.300 clat (usec): min=399, max=1804, avg=478.10, stdev=47.77 00:27:36.300 lat (usec): min=406, max=1818, avg=487.99, stdev=48.92 00:27:36.300 clat percentiles (usec): 00:27:36.300 | 1.00th=[ 408], 5.00th=[ 420], 10.00th=[ 429], 20.00th=[ 441], 00:27:36.300 | 30.00th=[ 453], 40.00th=[ 461], 50.00th=[ 469], 60.00th=[ 482], 00:27:36.300 | 70.00th=[ 494], 80.00th=[ 510], 90.00th=[ 537], 95.00th=[ 562], 00:27:36.300 | 99.00th=[ 627], 99.50th=[ 652], 99.90th=[ 701], 99.95th=[ 766], 00:27:36.300 | 99.99th=[ 1631] 00:27:36.300 bw ( KiB/s): min=29504, max=32288, per=100.00%, avg=31494.74, stdev=629.48, samples=19 00:27:36.300 iops : min= 7376, max= 8072, avg=7873.68, stdev=157.37, samples=19 00:27:36.300 lat (usec) : 500=74.34%, 
750=25.61%, 1000=0.02% 00:27:36.300 lat (msec) : 2=0.04% 00:27:36.300 cpu : usr=86.47%, sys=11.73%, ctx=85, majf=0, minf=1061 00:27:36.300 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:36.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.300 issued rwts: total=78740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.300 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:36.300 00:27:36.300 Run status group 0 (all jobs): 00:27:36.300 READ: bw=30.8MiB/s (32.2MB/s), 30.8MiB/s-30.8MiB/s (32.2MB/s-32.2MB/s), io=308MiB (323MB), run=10001-10001msec 00:27:36.300 ----------------------------------------------------- 00:27:36.300 Suppressions used: 00:27:36.300 count bytes template 00:27:36.300 1 8 /usr/src/fio/parse.c 00:27:36.300 1 8 libtcmalloc_minimal.so 00:27:36.300 1 904 libcrypto.so 00:27:36.300 ----------------------------------------------------- 00:27:36.300 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.301 00:27:36.301 real 0m12.245s 00:27:36.301 user 0m10.472s 00:27:36.301 sys 0m1.535s 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:36.301 ************************************ 00:27:36.301 END TEST fio_dif_1_default 00:27:36.301 ************************************ 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:36.301 09:29:29 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:36.301 09:29:29 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:36.301 09:29:29 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:36.301 09:29:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:36.301 ************************************ 00:27:36.301 START TEST fio_dif_1_multi_subsystems 00:27:36.301 ************************************ 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@94 -- # create_subsystems 0 1 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:36.301 bdev_null0 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:36.301 [2024-12-13 09:29:29.740474] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:36.301 bdev_null1 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:36.301 09:29:29 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:36.301 { 00:27:36.301 "params": { 00:27:36.301 "name": "Nvme$subsystem", 00:27:36.301 "trtype": "$TEST_TRANSPORT", 00:27:36.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.301 "adrfam": "ipv4", 00:27:36.301 "trsvcid": "$NVMF_PORT", 00:27:36.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.301 "hdgst": ${hdgst:-false}, 00:27:36.301 "ddgst": ${ddgst:-false} 00:27:36.301 }, 00:27:36.301 "method": "bdev_nvme_attach_controller" 00:27:36.301 } 00:27:36.301 EOF 00:27:36.301 )") 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:36.301 { 00:27:36.301 "params": { 00:27:36.301 "name": "Nvme$subsystem", 00:27:36.301 "trtype": "$TEST_TRANSPORT", 00:27:36.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.301 "adrfam": "ipv4", 00:27:36.301 "trsvcid": "$NVMF_PORT", 00:27:36.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.301 "hdgst": ${hdgst:-false}, 00:27:36.301 "ddgst": ${ddgst:-false} 00:27:36.301 }, 00:27:36.301 "method": "bdev_nvme_attach_controller" 00:27:36.301 } 00:27:36.301 EOF 00:27:36.301 )") 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
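The two-controller JSON printed just below is assembled by the traced gen_nvmf_target_json call: each subsystem contributes one bdev_nvme_attach_controller fragment to a bash array, and the fragments are joined with a comma before being handed to fio over a file descriptor. What follows is a minimal standalone sketch of that pattern; the outer "subsystems"/"bdev" wrapper and the helper name build_attach_config are assumptions for illustration, while the per-controller fields mirror the fragments visible in the trace.

#!/usr/bin/env bash
# Sketch only: build one bdev_nvme_attach_controller fragment per subsystem id
# and join them with a comma, the way the config=() / IFS=, / printf sequence
# in the trace does. Field values are the ones used by this test run.
build_attach_config() {
    local config=() sub
    for sub in "$@"; do
        config+=("$(printf '{"params": {"name": "Nvme%s", "trtype": "tcp", "traddr": "10.0.0.3", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false}, "method": "bdev_nvme_attach_controller"}' "$sub" "$sub" "$sub")")
    done
    local IFS=,
    printf '%s' "${config[*]}"
}

# Wrap the fragments in a bdev config block (wrapper shape assumed) and
# pretty-print it, roughly what fio later receives via --spdk_json_conf.
printf '{"subsystems": [{"subsystem": "bdev", "config": [%s]}]}\n' \
    "$(build_attach_config 0 1)" | jq .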
00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:27:36.301 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:36.301 "params": { 00:27:36.301 "name": "Nvme0", 00:27:36.301 "trtype": "tcp", 00:27:36.301 "traddr": "10.0.0.3", 00:27:36.301 "adrfam": "ipv4", 00:27:36.301 "trsvcid": "4420", 00:27:36.302 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:36.302 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:36.302 "hdgst": false, 00:27:36.302 "ddgst": false 00:27:36.302 }, 00:27:36.302 "method": "bdev_nvme_attach_controller" 00:27:36.302 },{ 00:27:36.302 "params": { 00:27:36.302 "name": "Nvme1", 00:27:36.302 "trtype": "tcp", 00:27:36.302 "traddr": "10.0.0.3", 00:27:36.302 "adrfam": "ipv4", 00:27:36.302 "trsvcid": "4420", 00:27:36.302 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:36.302 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:36.302 "hdgst": false, 00:27:36.302 "ddgst": false 00:27:36.302 }, 00:27:36.302 "method": "bdev_nvme_attach_controller" 00:27:36.302 }' 00:27:36.302 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:36.302 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:36.302 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # break 00:27:36.302 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:36.302 09:29:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:36.302 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:36.302 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:36.302 fio-3.35 00:27:36.302 Starting 2 threads 00:27:48.508 00:27:48.508 filename0: (groupid=0, jobs=1): err= 0: pid=91279: Fri Dec 13 09:29:40 2024 00:27:48.508 read: IOPS=4261, BW=16.6MiB/s (17.5MB/s)(167MiB/10001msec) 00:27:48.508 slat (nsec): min=7748, max=80982, avg=15101.84, stdev=4996.82 00:27:48.508 clat (usec): min=702, max=1295, avg=896.04, stdev=67.52 00:27:48.508 lat (usec): min=711, max=1358, avg=911.15, stdev=68.80 00:27:48.508 clat percentiles (usec): 00:27:48.508 | 1.00th=[ 758], 5.00th=[ 799], 10.00th=[ 824], 20.00th=[ 840], 00:27:48.508 | 30.00th=[ 865], 40.00th=[ 873], 50.00th=[ 889], 60.00th=[ 906], 00:27:48.508 | 70.00th=[ 922], 80.00th=[ 947], 90.00th=[ 979], 95.00th=[ 1012], 00:27:48.508 | 99.00th=[ 1106], 99.50th=[ 1139], 99.90th=[ 1205], 99.95th=[ 1237], 00:27:48.508 | 99.99th=[ 1270] 00:27:48.508 bw ( KiB/s): min=16768, max=17408, per=50.02%, avg=17056.00, stdev=205.45, samples=19 00:27:48.508 iops : min= 4192, max= 4352, avg=4264.00, stdev=51.36, samples=19 00:27:48.508 lat (usec) : 750=0.75%, 1000=92.57% 00:27:48.508 lat (msec) : 2=6.69% 00:27:48.508 cpu : usr=90.62%, sys=7.98%, ctx=156, majf=0, minf=1061 00:27:48.508 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:48.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:48.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:48.508 issued rwts: total=42624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:48.508 latency : 
target=0, window=0, percentile=100.00%, depth=4 00:27:48.509 filename1: (groupid=0, jobs=1): err= 0: pid=91280: Fri Dec 13 09:29:40 2024 00:27:48.509 read: IOPS=4261, BW=16.6MiB/s (17.5MB/s)(167MiB/10001msec) 00:27:48.509 slat (nsec): min=7699, max=75209, avg=15302.00, stdev=5408.69 00:27:48.509 clat (usec): min=527, max=1518, avg=894.96, stdev=60.58 00:27:48.509 lat (usec): min=535, max=1555, avg=910.26, stdev=61.66 00:27:48.509 clat percentiles (usec): 00:27:48.509 | 1.00th=[ 799], 5.00th=[ 816], 10.00th=[ 832], 20.00th=[ 848], 00:27:48.509 | 30.00th=[ 857], 40.00th=[ 873], 50.00th=[ 881], 60.00th=[ 898], 00:27:48.509 | 70.00th=[ 914], 80.00th=[ 938], 90.00th=[ 971], 95.00th=[ 1004], 00:27:48.509 | 99.00th=[ 1106], 99.50th=[ 1123], 99.90th=[ 1205], 99.95th=[ 1237], 00:27:48.509 | 99.99th=[ 1287] 00:27:48.509 bw ( KiB/s): min=16768, max=17408, per=50.02%, avg=17056.05, stdev=205.95, samples=19 00:27:48.509 iops : min= 4192, max= 4352, avg=4264.00, stdev=51.50, samples=19 00:27:48.509 lat (usec) : 750=0.01%, 1000=94.62% 00:27:48.509 lat (msec) : 2=5.37% 00:27:48.509 cpu : usr=90.47%, sys=8.12%, ctx=9, majf=0, minf=1075 00:27:48.509 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:48.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:48.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:48.509 issued rwts: total=42624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:48.509 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:48.509 00:27:48.509 Run status group 0 (all jobs): 00:27:48.509 READ: bw=33.3MiB/s (34.9MB/s), 16.6MiB/s-16.6MiB/s (17.5MB/s-17.5MB/s), io=333MiB (349MB), run=10001-10001msec 00:27:48.509 ----------------------------------------------------- 00:27:48.509 Suppressions used: 00:27:48.509 count bytes template 00:27:48.509 2 16 /usr/src/fio/parse.c 00:27:48.509 1 8 libtcmalloc_minimal.so 00:27:48.509 1 904 libcrypto.so 00:27:48.509 ----------------------------------------------------- 00:27:48.509 00:27:48.509 09:29:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:48.509 09:29:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:27:48.509 09:29:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:48.509 09:29:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:48.509 09:29:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:27:48.509 09:29:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:48.509 09:29:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.509 09:29:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:48.509 09:29:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.509 09:29:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:48.509 09:29:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.509 09:29:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:48.509 09:29:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.509 09:29:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 
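The teardown loop starting here undoes, per subsystem, what the earlier create_subsystems calls set up: the NVMe-oF subsystem is deleted first, then its backing null bdev. A condensed sketch of that lifecycle is below, assuming rpc_cmd ultimately resolves to SPDK's scripts/rpc.py against the running target; the helper names are hypothetical, while the RPCs and their arguments are the ones visible in the trace.

#!/usr/bin/env bash
# Sketch only: per-subsystem setup/teardown as traced by target/dif.sh,
# expressed as two helpers. "rpc" is assumed to point at scripts/rpc.py.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

create_dif_subsystem() {   # args: <id> <dif-type>
    local id=$1 dif=$2
    # Null bdev with 512-byte blocks, 16 bytes of metadata and the requested
    # DIF type (arguments copied from the trace).
    $rpc bdev_null_create "bdev_null$id" 64 512 --md-size 16 --dif-type "$dif"
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$id" \
        --serial-number "53313233-$id" --allow-any-host
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$id" "bdev_null$id"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$id" \
        -t tcp -a 10.0.0.3 -s 4420
}

destroy_dif_subsystem() {  # args: <id>
    local id=$1
    $rpc nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$id"
    $rpc bdev_null_delete "bdev_null$id"
}

# Example: the two DIF type-1 subsystems used by fio_dif_1_multi_subsystems.
for id in 0 1; do create_dif_subsystem "$id" 1; done
# ... fio runs against both namespaces here ...
for id in 0 1; do destroy_dif_subsystem "$id"; done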
00:27:48.509 09:29:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:48.509 09:29:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:27:48.509 09:29:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:48.509 09:29:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.509 09:29:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:48.509 09:29:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.509 09:29:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:48.509 09:29:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.509 09:29:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:48.509 09:29:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.509 00:27:48.509 real 0m12.389s 00:27:48.509 user 0m20.068s 00:27:48.509 sys 0m1.991s 00:27:48.509 09:29:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:48.509 09:29:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:48.509 ************************************ 00:27:48.509 END TEST fio_dif_1_multi_subsystems 00:27:48.509 ************************************ 00:27:48.509 09:29:42 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:48.509 09:29:42 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:48.509 09:29:42 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:48.509 09:29:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:48.509 ************************************ 00:27:48.509 START TEST fio_dif_rand_params 00:27:48.509 ************************************ 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.509 09:29:42 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.509 bdev_null0 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.509 [2024-12-13 09:29:42.183370] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:48.509 { 00:27:48.509 "params": { 00:27:48.509 "name": "Nvme$subsystem", 00:27:48.509 "trtype": "$TEST_TRANSPORT", 00:27:48.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.509 "adrfam": "ipv4", 00:27:48.509 "trsvcid": "$NVMF_PORT", 00:27:48.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.509 "hdgst": ${hdgst:-false}, 00:27:48.509 "ddgst": ${ddgst:-false} 00:27:48.509 }, 00:27:48.509 "method": "bdev_nvme_attach_controller" 00:27:48.509 } 00:27:48.509 EOF 00:27:48.509 )") 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:48.509 09:29:42 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:48.509 09:29:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:27:48.510 09:29:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:27:48.510 09:29:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:48.510 "params": { 00:27:48.510 "name": "Nvme0", 00:27:48.510 "trtype": "tcp", 00:27:48.510 "traddr": "10.0.0.3", 00:27:48.510 "adrfam": "ipv4", 00:27:48.510 "trsvcid": "4420", 00:27:48.510 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:48.510 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:48.510 "hdgst": false, 00:27:48.510 "ddgst": false 00:27:48.510 }, 00:27:48.510 "method": "bdev_nvme_attach_controller" 00:27:48.510 }' 00:27:48.510 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:48.510 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:48.510 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:27:48.510 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:48.510 09:29:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:48.769 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:48.769 ... 
00:27:48.769 fio-3.35 00:27:48.769 Starting 3 threads 00:27:55.335 00:27:55.335 filename0: (groupid=0, jobs=1): err= 0: pid=91439: Fri Dec 13 09:29:48 2024 00:27:55.335 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(142MiB/5007msec) 00:27:55.335 slat (nsec): min=5520, max=52225, avg=17151.54, stdev=5123.16 00:27:55.335 clat (usec): min=10776, max=17577, avg=13169.87, stdev=583.23 00:27:55.335 lat (usec): min=10791, max=17611, avg=13187.02, stdev=583.75 00:27:55.335 clat percentiles (usec): 00:27:55.335 | 1.00th=[12518], 5.00th=[12649], 10.00th=[12649], 20.00th=[12780], 00:27:55.335 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:27:55.335 | 70.00th=[13304], 80.00th=[13566], 90.00th=[13829], 95.00th=[14222], 00:27:55.335 | 99.00th=[15008], 99.50th=[15008], 99.90th=[17695], 99.95th=[17695], 00:27:55.335 | 99.99th=[17695] 00:27:55.335 bw ( KiB/s): min=27648, max=29952, per=33.29%, avg=29030.40, stdev=793.19, samples=10 00:27:55.335 iops : min= 216, max= 234, avg=226.80, stdev= 6.20, samples=10 00:27:55.335 lat (msec) : 20=100.00% 00:27:55.335 cpu : usr=92.21%, sys=7.17%, ctx=12, majf=0, minf=1074 00:27:55.335 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:55.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.335 issued rwts: total=1137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.335 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:55.335 filename0: (groupid=0, jobs=1): err= 0: pid=91440: Fri Dec 13 09:29:48 2024 00:27:55.335 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(142MiB/5006msec) 00:27:55.335 slat (nsec): min=7907, max=68994, avg=14069.41, stdev=7423.64 00:27:55.335 clat (usec): min=12252, max=15267, avg=13171.39, stdev=531.76 00:27:55.335 lat (usec): min=12260, max=15286, avg=13185.46, stdev=532.54 00:27:55.335 clat percentiles (usec): 00:27:55.335 | 1.00th=[12518], 5.00th=[12649], 10.00th=[12649], 20.00th=[12780], 00:27:55.335 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:27:55.335 | 70.00th=[13304], 80.00th=[13566], 90.00th=[13960], 95.00th=[14222], 00:27:55.335 | 99.00th=[14877], 99.50th=[15270], 99.90th=[15270], 99.95th=[15270], 00:27:55.335 | 99.99th=[15270] 00:27:55.335 bw ( KiB/s): min=27648, max=29952, per=33.29%, avg=29030.40, stdev=705.74, samples=10 00:27:55.335 iops : min= 216, max= 234, avg=226.80, stdev= 5.51, samples=10 00:27:55.335 lat (msec) : 20=100.00% 00:27:55.335 cpu : usr=91.67%, sys=7.67%, ctx=9, majf=0, minf=1062 00:27:55.335 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:55.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.335 issued rwts: total=1137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.335 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:55.335 filename0: (groupid=0, jobs=1): err= 0: pid=91441: Fri Dec 13 09:29:48 2024 00:27:55.335 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(142MiB/5005msec) 00:27:55.335 slat (nsec): min=5631, max=49435, avg=17177.01, stdev=5018.94 00:27:55.335 clat (usec): min=10770, max=16004, avg=13165.85, stdev=555.89 00:27:55.335 lat (usec): min=10785, max=16025, avg=13183.03, stdev=556.31 00:27:55.335 clat percentiles (usec): 00:27:55.335 | 1.00th=[12518], 5.00th=[12649], 10.00th=[12649], 20.00th=[12780], 00:27:55.335 | 30.00th=[12780], 40.00th=[12911], 
50.00th=[13042], 60.00th=[13173], 00:27:55.335 | 70.00th=[13304], 80.00th=[13566], 90.00th=[13829], 95.00th=[14222], 00:27:55.335 | 99.00th=[15008], 99.50th=[15008], 99.90th=[16057], 99.95th=[16057], 00:27:55.335 | 99.99th=[16057] 00:27:55.335 bw ( KiB/s): min=27648, max=29952, per=33.29%, avg=29030.40, stdev=793.19, samples=10 00:27:55.335 iops : min= 216, max= 234, avg=226.80, stdev= 6.20, samples=10 00:27:55.335 lat (msec) : 20=100.00% 00:27:55.335 cpu : usr=91.81%, sys=7.55%, ctx=18, majf=0, minf=1072 00:27:55.335 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:55.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.335 issued rwts: total=1137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.335 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:55.335 00:27:55.335 Run status group 0 (all jobs): 00:27:55.335 READ: bw=85.2MiB/s (89.3MB/s), 28.4MiB/s-28.4MiB/s (29.8MB/s-29.8MB/s), io=426MiB (447MB), run=5005-5007msec 00:27:55.595 ----------------------------------------------------- 00:27:55.595 Suppressions used: 00:27:55.595 count bytes template 00:27:55.595 5 44 /usr/src/fio/parse.c 00:27:55.595 1 8 libtcmalloc_minimal.so 00:27:55.595 1 904 libcrypto.so 00:27:55.595 ----------------------------------------------------- 00:27:55.595 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:55.595 09:29:49 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.595 bdev_null0 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:55.595 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.596 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.856 [2024-12-13 09:29:49.496349] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.856 bdev_null1 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:55.856 09:29:49 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.856 bdev_null2 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # 
config+=("$(cat <<-EOF 00:27:55.856 { 00:27:55.856 "params": { 00:27:55.856 "name": "Nvme$subsystem", 00:27:55.856 "trtype": "$TEST_TRANSPORT", 00:27:55.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.856 "adrfam": "ipv4", 00:27:55.856 "trsvcid": "$NVMF_PORT", 00:27:55.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.856 "hdgst": ${hdgst:-false}, 00:27:55.856 "ddgst": ${ddgst:-false} 00:27:55.856 }, 00:27:55.856 "method": "bdev_nvme_attach_controller" 00:27:55.856 } 00:27:55.856 EOF 00:27:55.856 )") 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:55.856 09:29:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:55.856 { 00:27:55.856 "params": { 00:27:55.856 "name": "Nvme$subsystem", 00:27:55.856 "trtype": "$TEST_TRANSPORT", 00:27:55.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.856 "adrfam": "ipv4", 00:27:55.856 "trsvcid": "$NVMF_PORT", 00:27:55.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.856 "hdgst": ${hdgst:-false}, 00:27:55.856 "ddgst": ${ddgst:-false} 00:27:55.856 }, 00:27:55.856 "method": "bdev_nvme_attach_controller" 00:27:55.857 } 00:27:55.857 EOF 00:27:55.857 )") 00:27:55.857 09:29:49 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@582 -- # cat 00:27:55.857 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:55.857 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:55.857 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:55.857 09:29:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:55.857 09:29:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:55.857 { 00:27:55.857 "params": { 00:27:55.857 "name": "Nvme$subsystem", 00:27:55.857 "trtype": "$TEST_TRANSPORT", 00:27:55.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.857 "adrfam": "ipv4", 00:27:55.857 "trsvcid": "$NVMF_PORT", 00:27:55.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.857 "hdgst": ${hdgst:-false}, 00:27:55.857 "ddgst": ${ddgst:-false} 00:27:55.857 }, 00:27:55.857 "method": "bdev_nvme_attach_controller" 00:27:55.857 } 00:27:55.857 EOF 00:27:55.857 )") 00:27:55.857 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:55.857 09:29:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:55.857 09:29:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:55.857 09:29:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:27:55.857 09:29:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:27:55.857 09:29:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:55.857 "params": { 00:27:55.857 "name": "Nvme0", 00:27:55.857 "trtype": "tcp", 00:27:55.857 "traddr": "10.0.0.3", 00:27:55.857 "adrfam": "ipv4", 00:27:55.857 "trsvcid": "4420", 00:27:55.857 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:55.857 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:55.857 "hdgst": false, 00:27:55.857 "ddgst": false 00:27:55.857 }, 00:27:55.857 "method": "bdev_nvme_attach_controller" 00:27:55.857 },{ 00:27:55.857 "params": { 00:27:55.857 "name": "Nvme1", 00:27:55.857 "trtype": "tcp", 00:27:55.857 "traddr": "10.0.0.3", 00:27:55.857 "adrfam": "ipv4", 00:27:55.857 "trsvcid": "4420", 00:27:55.857 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:55.857 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:55.857 "hdgst": false, 00:27:55.857 "ddgst": false 00:27:55.857 }, 00:27:55.857 "method": "bdev_nvme_attach_controller" 00:27:55.857 },{ 00:27:55.857 "params": { 00:27:55.857 "name": "Nvme2", 00:27:55.857 "trtype": "tcp", 00:27:55.857 "traddr": "10.0.0.3", 00:27:55.857 "adrfam": "ipv4", 00:27:55.857 "trsvcid": "4420", 00:27:55.857 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:55.857 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:55.857 "hdgst": false, 00:27:55.857 "ddgst": false 00:27:55.857 }, 00:27:55.857 "method": "bdev_nvme_attach_controller" 00:27:55.857 }' 00:27:55.857 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:55.857 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:55.857 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:27:55.857 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:55.857 09:29:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:27:56.116 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:56.116 ... 00:27:56.116 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:56.116 ... 00:27:56.116 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:56.116 ... 00:27:56.116 fio-3.35 00:27:56.116 Starting 24 threads 00:28:08.327 00:28:08.327 filename0: (groupid=0, jobs=1): err= 0: pid=91542: Fri Dec 13 09:30:00 2024 00:28:08.327 read: IOPS=186, BW=746KiB/s (763kB/s)(7476KiB/10028msec) 00:28:08.327 slat (usec): min=5, max=4033, avg=19.42, stdev=93.19 00:28:08.327 clat (msec): min=2, max=165, avg=85.65, stdev=31.99 00:28:08.327 lat (msec): min=2, max=165, avg=85.67, stdev=31.99 00:28:08.327 clat percentiles (msec): 00:28:08.327 | 1.00th=[ 7], 5.00th=[ 32], 10.00th=[ 48], 20.00th=[ 61], 00:28:08.327 | 30.00th=[ 71], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 94], 00:28:08.327 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 132], 95.00th=[ 144], 00:28:08.327 | 99.00th=[ 146], 99.50th=[ 150], 99.90th=[ 161], 99.95th=[ 167], 00:28:08.327 | 99.99th=[ 167] 00:28:08.327 bw ( KiB/s): min= 504, max= 1590, per=4.45%, avg=742.25, stdev=223.66, samples=20 00:28:08.327 iops : min= 126, max= 397, avg=185.50, stdev=55.82, samples=20 00:28:08.327 lat (msec) : 4=0.75%, 10=1.71%, 20=0.21%, 50=9.74%, 100=62.71% 00:28:08.327 lat (msec) : 250=24.88% 00:28:08.327 cpu : usr=33.88%, sys=2.17%, ctx=966, majf=0, minf=1075 00:28:08.327 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=81.4%, 16=16.1%, 32=0.0%, >=64=0.0% 00:28:08.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.327 complete : 0=0.0%, 4=87.7%, 8=11.9%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.327 issued rwts: total=1869,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.327 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:08.327 filename0: (groupid=0, jobs=1): err= 0: pid=91543: Fri Dec 13 09:30:00 2024 00:28:08.327 read: IOPS=174, BW=696KiB/s (713kB/s)(6976KiB/10019msec) 00:28:08.327 slat (usec): min=5, max=7033, avg=27.53, stdev=236.81 00:28:08.327 clat (usec): min=1168, max=191625, avg=91629.02, stdev=44535.15 00:28:08.327 lat (usec): min=1181, max=191667, avg=91656.55, stdev=44541.89 00:28:08.327 clat percentiles (msec): 00:28:08.327 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 8], 20.00th=[ 65], 00:28:08.327 | 30.00th=[ 86], 40.00th=[ 91], 50.00th=[ 96], 60.00th=[ 105], 00:28:08.327 | 70.00th=[ 121], 80.00th=[ 133], 90.00th=[ 140], 95.00th=[ 144], 00:28:08.327 | 99.00th=[ 180], 99.50th=[ 192], 99.90th=[ 192], 99.95th=[ 192], 00:28:08.327 | 99.99th=[ 192] 00:28:08.327 bw ( KiB/s): min= 400, max= 2928, per=4.17%, avg=696.80, stdev=533.61, samples=20 00:28:08.327 iops : min= 100, max= 732, avg=174.20, stdev=133.40, samples=20 00:28:08.327 lat (msec) : 2=0.11%, 4=5.39%, 10=7.22%, 20=1.95%, 50=3.67% 00:28:08.327 lat (msec) : 100=36.81%, 250=44.84% 00:28:08.327 cpu : usr=43.29%, sys=3.16%, ctx=1712, majf=0, minf=1075 00:28:08.327 IO depths : 1=0.7%, 2=7.0%, 4=25.0%, 8=55.5%, 16=11.8%, 32=0.0%, >=64=0.0% 00:28:08.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.327 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.327 issued rwts: total=1744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.327 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:08.327 filename0: (groupid=0, 
jobs=1): err= 0: pid=91544: Fri Dec 13 09:30:00 2024 00:28:08.327 read: IOPS=180, BW=724KiB/s (741kB/s)(7252KiB/10019msec) 00:28:08.327 slat (usec): min=5, max=9037, avg=45.71, stdev=432.63 00:28:08.327 clat (msec): min=20, max=164, avg=88.20, stdev=28.93 00:28:08.327 lat (msec): min=20, max=164, avg=88.24, stdev=28.93 00:28:08.327 clat percentiles (msec): 00:28:08.327 | 1.00th=[ 29], 5.00th=[ 44], 10.00th=[ 54], 20.00th=[ 63], 00:28:08.327 | 30.00th=[ 69], 40.00th=[ 83], 50.00th=[ 88], 60.00th=[ 92], 00:28:08.327 | 70.00th=[ 100], 80.00th=[ 113], 90.00th=[ 136], 95.00th=[ 142], 00:28:08.327 | 99.00th=[ 146], 99.50th=[ 153], 99.90th=[ 159], 99.95th=[ 165], 00:28:08.327 | 99.99th=[ 165] 00:28:08.327 bw ( KiB/s): min= 512, max= 912, per=4.24%, avg=708.53, stdev=117.68, samples=19 00:28:08.327 iops : min= 128, max= 228, avg=177.05, stdev=29.43, samples=19 00:28:08.327 lat (msec) : 50=8.05%, 100=64.26%, 250=27.69% 00:28:08.327 cpu : usr=40.11%, sys=2.59%, ctx=1439, majf=0, minf=1071 00:28:08.327 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=81.0%, 16=15.4%, 32=0.0%, >=64=0.0% 00:28:08.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.327 complete : 0=0.0%, 4=87.5%, 8=11.8%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.327 issued rwts: total=1813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.327 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:08.327 filename0: (groupid=0, jobs=1): err= 0: pid=91545: Fri Dec 13 09:30:00 2024 00:28:08.327 read: IOPS=177, BW=709KiB/s (726kB/s)(7120KiB/10043msec) 00:28:08.327 slat (usec): min=5, max=8033, avg=27.07, stdev=285.07 00:28:08.327 clat (msec): min=23, max=167, avg=89.96, stdev=28.37 00:28:08.327 lat (msec): min=23, max=167, avg=89.99, stdev=28.36 00:28:08.327 clat percentiles (msec): 00:28:08.327 | 1.00th=[ 33], 5.00th=[ 45], 10.00th=[ 52], 20.00th=[ 63], 00:28:08.327 | 30.00th=[ 75], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 96], 00:28:08.327 | 70.00th=[ 96], 80.00th=[ 110], 90.00th=[ 133], 95.00th=[ 142], 00:28:08.327 | 99.00th=[ 146], 99.50th=[ 157], 99.90th=[ 167], 99.95th=[ 169], 00:28:08.327 | 99.99th=[ 169] 00:28:08.327 bw ( KiB/s): min= 504, max= 937, per=4.24%, avg=707.70, stdev=125.93, samples=20 00:28:08.327 iops : min= 126, max= 234, avg=176.90, stdev=31.45, samples=20 00:28:08.327 lat (msec) : 50=9.33%, 100=64.89%, 250=25.79% 00:28:08.327 cpu : usr=31.24%, sys=2.15%, ctx=853, majf=0, minf=1073 00:28:08.328 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.6%, 16=16.3%, 32=0.0%, >=64=0.0% 00:28:08.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.328 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.328 issued rwts: total=1780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.328 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:08.328 filename0: (groupid=0, jobs=1): err= 0: pid=91546: Fri Dec 13 09:30:00 2024 00:28:08.328 read: IOPS=149, BW=600KiB/s (614kB/s)(6016KiB/10032msec) 00:28:08.328 slat (usec): min=5, max=8042, avg=29.73, stdev=273.79 00:28:08.328 clat (msec): min=14, max=220, avg=106.49, stdev=29.37 00:28:08.328 lat (msec): min=14, max=220, avg=106.52, stdev=29.37 00:28:08.328 clat percentiles (msec): 00:28:08.328 | 1.00th=[ 24], 5.00th=[ 50], 10.00th=[ 81], 20.00th=[ 88], 00:28:08.328 | 30.00th=[ 90], 40.00th=[ 96], 50.00th=[ 101], 60.00th=[ 116], 00:28:08.328 | 70.00th=[ 126], 80.00th=[ 132], 90.00th=[ 142], 95.00th=[ 155], 00:28:08.328 | 99.00th=[ 184], 99.50th=[ 184], 99.90th=[ 222], 99.95th=[ 222], 00:28:08.328 | 
99.99th=[ 222] 00:28:08.328 bw ( KiB/s): min= 496, max= 766, per=3.56%, avg=595.80, stdev=94.90, samples=20 00:28:08.328 iops : min= 124, max= 191, avg=148.90, stdev=23.63, samples=20 00:28:08.328 lat (msec) : 20=0.13%, 50=5.32%, 100=43.95%, 250=50.60% 00:28:08.328 cpu : usr=39.14%, sys=2.19%, ctx=1178, majf=0, minf=1071 00:28:08.328 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:28:08.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.328 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.328 issued rwts: total=1504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.328 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:08.328 filename0: (groupid=0, jobs=1): err= 0: pid=91547: Fri Dec 13 09:30:00 2024 00:28:08.328 read: IOPS=165, BW=663KiB/s (679kB/s)(6652KiB/10035msec) 00:28:08.328 slat (usec): min=4, max=8039, avg=28.37, stdev=261.48 00:28:08.328 clat (msec): min=2, max=210, avg=96.19, stdev=43.17 00:28:08.328 lat (msec): min=2, max=210, avg=96.22, stdev=43.17 00:28:08.328 clat percentiles (msec): 00:28:08.328 | 1.00th=[ 3], 5.00th=[ 11], 10.00th=[ 24], 20.00th=[ 72], 00:28:08.328 | 30.00th=[ 85], 40.00th=[ 91], 50.00th=[ 95], 60.00th=[ 97], 00:28:08.328 | 70.00th=[ 118], 80.00th=[ 132], 90.00th=[ 144], 95.00th=[ 176], 00:28:08.328 | 99.00th=[ 192], 99.50th=[ 192], 99.90th=[ 211], 99.95th=[ 211], 00:28:08.328 | 99.99th=[ 211] 00:28:08.328 bw ( KiB/s): min= 384, max= 2043, per=3.95%, avg=660.95, stdev=354.26, samples=20 00:28:08.328 iops : min= 96, max= 510, avg=165.20, stdev=88.41, samples=20 00:28:08.328 lat (msec) : 4=1.32%, 10=3.49%, 20=3.73%, 50=4.15%, 100=48.59% 00:28:08.328 lat (msec) : 250=38.73% 00:28:08.328 cpu : usr=35.70%, sys=2.27%, ctx=1082, majf=0, minf=1073 00:28:08.328 IO depths : 1=0.4%, 2=5.5%, 4=20.6%, 8=60.6%, 16=12.8%, 32=0.0%, >=64=0.0% 00:28:08.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.328 complete : 0=0.0%, 4=93.1%, 8=2.3%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.328 issued rwts: total=1663,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.328 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:08.328 filename0: (groupid=0, jobs=1): err= 0: pid=91548: Fri Dec 13 09:30:00 2024 00:28:08.328 read: IOPS=183, BW=735KiB/s (752kB/s)(7380KiB/10043msec) 00:28:08.328 slat (usec): min=5, max=8043, avg=35.30, stdev=373.11 00:28:08.328 clat (msec): min=22, max=158, avg=86.80, stdev=28.66 00:28:08.328 lat (msec): min=22, max=158, avg=86.83, stdev=28.65 00:28:08.328 clat percentiles (msec): 00:28:08.328 | 1.00th=[ 31], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 61], 00:28:08.328 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 85], 60.00th=[ 93], 00:28:08.328 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 132], 95.00th=[ 142], 00:28:08.328 | 99.00th=[ 144], 99.50th=[ 153], 99.90th=[ 159], 99.95th=[ 159], 00:28:08.328 | 99.99th=[ 159] 00:28:08.328 bw ( KiB/s): min= 512, max= 1016, per=4.40%, avg=734.05, stdev=136.45, samples=20 00:28:08.328 iops : min= 128, max= 254, avg=183.50, stdev=34.11, samples=20 00:28:08.328 lat (msec) : 50=10.95%, 100=65.20%, 250=23.85% 00:28:08.328 cpu : usr=33.41%, sys=2.38%, ctx=993, majf=0, minf=1072 00:28:08.328 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=82.2%, 16=15.9%, 32=0.0%, >=64=0.0% 00:28:08.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.328 complete : 0=0.0%, 4=87.3%, 8=12.3%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.328 issued rwts: total=1845,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:28:08.328 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:08.328 filename0: (groupid=0, jobs=1): err= 0: pid=91549: Fri Dec 13 09:30:00 2024 00:28:08.328 read: IOPS=172, BW=689KiB/s (705kB/s)(6892KiB/10010msec) 00:28:08.328 slat (usec): min=4, max=8034, avg=36.67, stdev=385.91 00:28:08.328 clat (msec): min=8, max=179, avg=92.74, stdev=31.50 00:28:08.328 lat (msec): min=8, max=179, avg=92.77, stdev=31.51 00:28:08.328 clat percentiles (msec): 00:28:08.328 | 1.00th=[ 16], 5.00th=[ 39], 10.00th=[ 58], 20.00th=[ 65], 00:28:08.328 | 30.00th=[ 81], 40.00th=[ 85], 50.00th=[ 94], 60.00th=[ 96], 00:28:08.328 | 70.00th=[ 108], 80.00th=[ 124], 90.00th=[ 136], 95.00th=[ 144], 00:28:08.328 | 99.00th=[ 163], 99.50th=[ 167], 99.90th=[ 180], 99.95th=[ 180], 00:28:08.328 | 99.99th=[ 180] 00:28:08.328 bw ( KiB/s): min= 400, max= 824, per=3.92%, avg=655.58, stdev=116.31, samples=19 00:28:08.328 iops : min= 100, max= 206, avg=163.89, stdev=29.08, samples=19 00:28:08.328 lat (msec) : 10=0.17%, 20=1.63%, 50=5.75%, 100=57.52%, 250=34.94% 00:28:08.328 cpu : usr=33.41%, sys=2.46%, ctx=938, majf=0, minf=1074 00:28:08.328 IO depths : 1=0.1%, 2=2.4%, 4=9.5%, 8=73.6%, 16=14.5%, 32=0.0%, >=64=0.0% 00:28:08.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.328 complete : 0=0.0%, 4=89.5%, 8=8.4%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.328 issued rwts: total=1723,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.328 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:08.328 filename1: (groupid=0, jobs=1): err= 0: pid=91550: Fri Dec 13 09:30:00 2024 00:28:08.328 read: IOPS=169, BW=678KiB/s (694kB/s)(6796KiB/10022msec) 00:28:08.328 slat (usec): min=5, max=6149, avg=25.60, stdev=195.49 00:28:08.328 clat (msec): min=22, max=190, avg=94.16, stdev=31.47 00:28:08.328 lat (msec): min=22, max=190, avg=94.19, stdev=31.47 00:28:08.328 clat percentiles (msec): 00:28:08.328 | 1.00th=[ 31], 5.00th=[ 47], 10.00th=[ 56], 20.00th=[ 65], 00:28:08.328 | 30.00th=[ 81], 40.00th=[ 86], 50.00th=[ 91], 60.00th=[ 96], 00:28:08.328 | 70.00th=[ 105], 80.00th=[ 129], 90.00th=[ 140], 95.00th=[ 144], 00:28:08.328 | 99.00th=[ 182], 99.50th=[ 190], 99.90th=[ 192], 99.95th=[ 192], 00:28:08.328 | 99.99th=[ 192] 00:28:08.328 bw ( KiB/s): min= 400, max= 912, per=3.95%, avg=660.42, stdev=133.84, samples=19 00:28:08.328 iops : min= 100, max= 228, avg=165.05, stdev=33.44, samples=19 00:28:08.328 lat (msec) : 50=7.12%, 100=58.56%, 250=34.31% 00:28:08.328 cpu : usr=40.74%, sys=2.47%, ctx=1345, majf=0, minf=1075 00:28:08.328 IO depths : 1=0.1%, 2=2.5%, 4=9.8%, 8=73.0%, 16=14.6%, 32=0.0%, >=64=0.0% 00:28:08.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.328 complete : 0=0.0%, 4=89.7%, 8=8.1%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.328 issued rwts: total=1699,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.328 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:08.328 filename1: (groupid=0, jobs=1): err= 0: pid=91551: Fri Dec 13 09:30:00 2024 00:28:08.328 read: IOPS=154, BW=618KiB/s (633kB/s)(6200KiB/10031msec) 00:28:08.328 slat (usec): min=5, max=4032, avg=22.10, stdev=144.31 00:28:08.328 clat (msec): min=27, max=179, avg=103.37, stdev=29.08 00:28:08.328 lat (msec): min=27, max=179, avg=103.40, stdev=29.09 00:28:08.328 clat percentiles (msec): 00:28:08.328 | 1.00th=[ 30], 5.00th=[ 48], 10.00th=[ 72], 20.00th=[ 85], 00:28:08.328 | 30.00th=[ 88], 40.00th=[ 93], 50.00th=[ 97], 60.00th=[ 111], 00:28:08.328 | 
70.00th=[ 124], 80.00th=[ 134], 90.00th=[ 142], 95.00th=[ 144], 00:28:08.328 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 180], 00:28:08.328 | 99.99th=[ 180] 00:28:08.328 bw ( KiB/s): min= 400, max= 1008, per=3.62%, avg=605.47, stdev=140.12, samples=19 00:28:08.328 iops : min= 100, max= 252, avg=151.37, stdev=35.03, samples=19 00:28:08.328 lat (msec) : 50=6.06%, 100=48.58%, 250=45.35% 00:28:08.328 cpu : usr=42.70%, sys=2.68%, ctx=1154, majf=0, minf=1073 00:28:08.328 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.3%, 16=12.5%, 32=0.0%, >=64=0.0% 00:28:08.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.328 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.328 issued rwts: total=1550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.328 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:08.328 filename1: (groupid=0, jobs=1): err= 0: pid=91552: Fri Dec 13 09:30:00 2024 00:28:08.328 read: IOPS=164, BW=658KiB/s (673kB/s)(6576KiB/10001msec) 00:28:08.328 slat (usec): min=5, max=8036, avg=30.46, stdev=342.38 00:28:08.328 clat (usec): min=1298, max=180125, avg=97157.52, stdev=35840.46 00:28:08.328 lat (usec): min=1306, max=180144, avg=97187.98, stdev=35841.31 00:28:08.328 clat percentiles (usec): 00:28:08.328 | 1.00th=[ 1614], 5.00th=[ 14877], 10.00th=[ 47973], 20.00th=[ 82314], 00:28:08.328 | 30.00th=[ 85459], 40.00th=[ 93848], 50.00th=[ 95945], 60.00th=[106431], 00:28:08.328 | 70.00th=[120062], 80.00th=[131597], 90.00th=[143655], 95.00th=[143655], 00:28:08.328 | 99.00th=[143655], 99.50th=[156238], 99.90th=[179307], 99.95th=[179307], 00:28:08.328 | 99.99th=[179307] 00:28:08.328 bw ( KiB/s): min= 400, max= 808, per=3.56%, avg=595.79, stdev=104.12, samples=19 00:28:08.328 iops : min= 100, max= 202, avg=148.95, stdev=26.03, samples=19 00:28:08.328 lat (msec) : 2=1.70%, 4=0.12%, 10=1.28%, 20=2.92%, 50=4.68% 00:28:08.328 lat (msec) : 100=47.32%, 250=41.97% 00:28:08.328 cpu : usr=31.28%, sys=2.14%, ctx=849, majf=0, minf=1073 00:28:08.328 IO depths : 1=0.1%, 2=4.4%, 4=17.9%, 8=64.1%, 16=13.6%, 32=0.0%, >=64=0.0% 00:28:08.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.329 complete : 0=0.0%, 4=92.3%, 8=3.7%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.329 issued rwts: total=1644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.329 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:08.329 filename1: (groupid=0, jobs=1): err= 0: pid=91553: Fri Dec 13 09:30:00 2024 00:28:08.329 read: IOPS=177, BW=708KiB/s (725kB/s)(7108KiB/10037msec) 00:28:08.329 slat (usec): min=4, max=8035, avg=28.79, stdev=285.45 00:28:08.329 clat (msec): min=25, max=165, avg=90.10, stdev=27.21 00:28:08.329 lat (msec): min=25, max=165, avg=90.12, stdev=27.22 00:28:08.329 clat percentiles (msec): 00:28:08.329 | 1.00th=[ 28], 5.00th=[ 47], 10.00th=[ 59], 20.00th=[ 67], 00:28:08.329 | 30.00th=[ 78], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 93], 00:28:08.329 | 70.00th=[ 100], 80.00th=[ 109], 90.00th=[ 133], 95.00th=[ 142], 00:28:08.329 | 99.00th=[ 148], 99.50th=[ 153], 99.90th=[ 165], 99.95th=[ 165], 00:28:08.329 | 99.99th=[ 165] 00:28:08.329 bw ( KiB/s): min= 504, max= 894, per=4.24%, avg=707.10, stdev=107.38, samples=20 00:28:08.329 iops : min= 126, max= 223, avg=176.75, stdev=26.80, samples=20 00:28:08.329 lat (msec) : 50=6.53%, 100=65.11%, 250=28.36% 00:28:08.329 cpu : usr=40.53%, sys=2.69%, ctx=1579, majf=0, minf=1075 00:28:08.329 IO depths : 1=0.1%, 2=1.1%, 4=4.4%, 8=78.8%, 16=15.6%, 32=0.0%, >=64=0.0% 
00:28:08.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.329 complete : 0=0.0%, 4=88.3%, 8=10.7%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.329 issued rwts: total=1777,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.329 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:08.329 filename1: (groupid=0, jobs=1): err= 0: pid=91554: Fri Dec 13 09:30:00 2024 00:28:08.329 read: IOPS=179, BW=717KiB/s (734kB/s)(7176KiB/10014msec) 00:28:08.329 slat (usec): min=5, max=8036, avg=30.99, stdev=283.17 00:28:08.329 clat (msec): min=14, max=194, avg=89.13, stdev=30.17 00:28:08.329 lat (msec): min=14, max=194, avg=89.16, stdev=30.16 00:28:08.329 clat percentiles (msec): 00:28:08.329 | 1.00th=[ 18], 5.00th=[ 39], 10.00th=[ 53], 20.00th=[ 64], 00:28:08.329 | 30.00th=[ 74], 40.00th=[ 84], 50.00th=[ 88], 60.00th=[ 95], 00:28:08.329 | 70.00th=[ 100], 80.00th=[ 111], 90.00th=[ 136], 95.00th=[ 142], 00:28:08.329 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 194], 99.95th=[ 194], 00:28:08.329 | 99.99th=[ 194] 00:28:08.329 bw ( KiB/s): min= 384, max= 792, per=4.11%, avg=686.32, stdev=110.87, samples=19 00:28:08.329 iops : min= 96, max= 198, avg=171.58, stdev=27.72, samples=19 00:28:08.329 lat (msec) : 20=1.28%, 50=6.80%, 100=62.88%, 250=29.04% 00:28:08.329 cpu : usr=36.13%, sys=2.30%, ctx=1161, majf=0, minf=1072 00:28:08.329 IO depths : 1=0.1%, 2=1.6%, 4=6.4%, 8=77.0%, 16=14.9%, 32=0.0%, >=64=0.0% 00:28:08.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.329 complete : 0=0.0%, 4=88.6%, 8=10.0%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.329 issued rwts: total=1794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.329 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:08.329 filename1: (groupid=0, jobs=1): err= 0: pid=91555: Fri Dec 13 09:30:00 2024 00:28:08.329 read: IOPS=182, BW=730KiB/s (748kB/s)(7332KiB/10037msec) 00:28:08.329 slat (usec): min=5, max=8036, avg=32.35, stdev=311.11 00:28:08.329 clat (msec): min=23, max=160, avg=87.35, stdev=28.35 00:28:08.329 lat (msec): min=23, max=160, avg=87.38, stdev=28.34 00:28:08.329 clat percentiles (msec): 00:28:08.329 | 1.00th=[ 33], 5.00th=[ 42], 10.00th=[ 51], 20.00th=[ 62], 00:28:08.329 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 85], 60.00th=[ 93], 00:28:08.329 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 133], 95.00th=[ 142], 00:28:08.329 | 99.00th=[ 144], 99.50th=[ 153], 99.90th=[ 161], 99.95th=[ 161], 00:28:08.329 | 99.99th=[ 161] 00:28:08.329 bw ( KiB/s): min= 512, max= 928, per=4.37%, avg=729.10, stdev=122.46, samples=20 00:28:08.329 iops : min= 128, max= 232, avg=182.25, stdev=30.58, samples=20 00:28:08.329 lat (msec) : 50=9.60%, 100=66.07%, 250=24.33% 00:28:08.329 cpu : usr=34.62%, sys=1.91%, ctx=1043, majf=0, minf=1074 00:28:08.329 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=81.1%, 16=15.5%, 32=0.0%, >=64=0.0% 00:28:08.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.329 complete : 0=0.0%, 4=87.5%, 8=11.9%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.329 issued rwts: total=1833,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.329 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:08.329 filename1: (groupid=0, jobs=1): err= 0: pid=91556: Fri Dec 13 09:30:00 2024 00:28:08.329 read: IOPS=191, BW=764KiB/s (783kB/s)(7644KiB/10003msec) 00:28:08.329 slat (usec): min=4, max=9033, avg=30.67, stdev=304.87 00:28:08.329 clat (msec): min=4, max=159, avg=83.62, stdev=32.24 00:28:08.329 lat (msec): min=4, max=159, avg=83.65, stdev=32.25 
00:28:08.329 clat percentiles (msec): 00:28:08.329 | 1.00th=[ 9], 5.00th=[ 24], 10.00th=[ 48], 20.00th=[ 61], 00:28:08.329 | 30.00th=[ 66], 40.00th=[ 78], 50.00th=[ 85], 60.00th=[ 93], 00:28:08.329 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 132], 95.00th=[ 142], 00:28:08.329 | 99.00th=[ 148], 99.50th=[ 153], 99.90th=[ 161], 99.95th=[ 161], 00:28:08.329 | 99.99th=[ 161] 00:28:08.329 bw ( KiB/s): min= 512, max= 896, per=4.30%, avg=717.47, stdev=107.47, samples=19 00:28:08.329 iops : min= 128, max= 224, avg=179.37, stdev=26.87, samples=19 00:28:08.329 lat (msec) : 10=1.52%, 20=2.67%, 50=9.94%, 100=62.85%, 250=23.02% 00:28:08.329 cpu : usr=33.50%, sys=2.21%, ctx=1006, majf=0, minf=1072 00:28:08.329 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:28:08.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.329 complete : 0=0.0%, 4=86.8%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.329 issued rwts: total=1911,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.329 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:08.329 filename1: (groupid=0, jobs=1): err= 0: pid=91557: Fri Dec 13 09:30:00 2024 00:28:08.329 read: IOPS=177, BW=709KiB/s (726kB/s)(7092KiB/10005msec) 00:28:08.329 slat (usec): min=5, max=8039, avg=36.32, stdev=354.68 00:28:08.329 clat (msec): min=7, max=187, avg=90.10, stdev=31.74 00:28:08.329 lat (msec): min=7, max=187, avg=90.14, stdev=31.74 00:28:08.329 clat percentiles (msec): 00:28:08.329 | 1.00th=[ 11], 5.00th=[ 36], 10.00th=[ 55], 20.00th=[ 65], 00:28:08.329 | 30.00th=[ 75], 40.00th=[ 85], 50.00th=[ 89], 60.00th=[ 94], 00:28:08.329 | 70.00th=[ 99], 80.00th=[ 121], 90.00th=[ 136], 95.00th=[ 144], 00:28:08.329 | 99.00th=[ 163], 99.50th=[ 178], 99.90th=[ 188], 99.95th=[ 188], 00:28:08.329 | 99.99th=[ 188] 00:28:08.329 bw ( KiB/s): min= 512, max= 824, per=4.03%, avg=672.74, stdev=108.74, samples=19 00:28:08.329 iops : min= 128, max= 206, avg=168.16, stdev=27.20, samples=19 00:28:08.329 lat (msec) : 10=0.85%, 20=1.58%, 50=6.49%, 100=61.93%, 250=29.16% 00:28:08.329 cpu : usr=39.03%, sys=2.49%, ctx=1193, majf=0, minf=1071 00:28:08.329 IO depths : 1=0.1%, 2=2.4%, 4=9.4%, 8=73.7%, 16=14.5%, 32=0.0%, >=64=0.0% 00:28:08.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.329 complete : 0=0.0%, 4=89.5%, 8=8.5%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.329 issued rwts: total=1773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.329 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:08.329 filename2: (groupid=0, jobs=1): err= 0: pid=91558: Fri Dec 13 09:30:00 2024 00:28:08.329 read: IOPS=183, BW=735KiB/s (753kB/s)(7360KiB/10014msec) 00:28:08.329 slat (usec): min=5, max=8034, avg=34.65, stdev=373.51 00:28:08.329 clat (msec): min=14, max=165, avg=86.91, stdev=28.63 00:28:08.329 lat (msec): min=14, max=165, avg=86.95, stdev=28.62 00:28:08.329 clat percentiles (msec): 00:28:08.329 | 1.00th=[ 22], 5.00th=[ 44], 10.00th=[ 57], 20.00th=[ 61], 00:28:08.329 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 85], 60.00th=[ 95], 00:28:08.329 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 132], 95.00th=[ 142], 00:28:08.329 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 165], 99.95th=[ 165], 00:28:08.329 | 99.99th=[ 165] 00:28:08.329 bw ( KiB/s): min= 512, max= 856, per=4.25%, avg=710.74, stdev=103.94, samples=19 00:28:08.329 iops : min= 128, max= 214, avg=177.68, stdev=25.99, samples=19 00:28:08.329 lat (msec) : 20=0.82%, 50=8.32%, 100=68.10%, 250=22.77% 00:28:08.329 cpu : usr=31.41%, 
sys=1.87%, ctx=844, majf=0, minf=1073 00:28:08.329 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=80.5%, 16=15.4%, 32=0.0%, >=64=0.0% 00:28:08.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.329 complete : 0=0.0%, 4=87.6%, 8=11.7%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.329 issued rwts: total=1840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.329 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:08.329 filename2: (groupid=0, jobs=1): err= 0: pid=91559: Fri Dec 13 09:30:00 2024 00:28:08.329 read: IOPS=171, BW=684KiB/s (700kB/s)(6844KiB/10005msec) 00:28:08.329 slat (usec): min=5, max=9171, avg=25.24, stdev=241.70 00:28:08.329 clat (msec): min=7, max=200, avg=93.44, stdev=33.45 00:28:08.329 lat (msec): min=7, max=200, avg=93.46, stdev=33.46 00:28:08.329 clat percentiles (msec): 00:28:08.329 | 1.00th=[ 11], 5.00th=[ 29], 10.00th=[ 56], 20.00th=[ 69], 00:28:08.329 | 30.00th=[ 84], 40.00th=[ 88], 50.00th=[ 94], 60.00th=[ 96], 00:28:08.329 | 70.00th=[ 106], 80.00th=[ 129], 90.00th=[ 138], 95.00th=[ 144], 00:28:08.329 | 99.00th=[ 176], 99.50th=[ 176], 99.90th=[ 201], 99.95th=[ 201], 00:28:08.329 | 99.99th=[ 201] 00:28:08.329 bw ( KiB/s): min= 400, max= 816, per=3.85%, avg=642.84, stdev=115.83, samples=19 00:28:08.329 iops : min= 100, max= 204, avg=160.68, stdev=28.96, samples=19 00:28:08.329 lat (msec) : 10=0.99%, 20=2.05%, 50=6.08%, 100=59.26%, 250=31.62% 00:28:08.329 cpu : usr=37.04%, sys=2.28%, ctx=1094, majf=0, minf=1074 00:28:08.329 IO depths : 1=0.1%, 2=3.2%, 4=12.6%, 8=70.2%, 16=14.0%, 32=0.0%, >=64=0.0% 00:28:08.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.329 complete : 0=0.0%, 4=90.4%, 8=6.8%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.330 issued rwts: total=1711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.330 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:08.330 filename2: (groupid=0, jobs=1): err= 0: pid=91560: Fri Dec 13 09:30:00 2024 00:28:08.330 read: IOPS=168, BW=675KiB/s (691kB/s)(6760KiB/10015msec) 00:28:08.330 slat (usec): min=5, max=8041, avg=52.18, stdev=524.26 00:28:08.330 clat (msec): min=22, max=214, avg=94.35, stdev=29.46 00:28:08.330 lat (msec): min=22, max=214, avg=94.40, stdev=29.47 00:28:08.330 clat percentiles (msec): 00:28:08.330 | 1.00th=[ 26], 5.00th=[ 48], 10.00th=[ 60], 20.00th=[ 71], 00:28:08.330 | 30.00th=[ 84], 40.00th=[ 85], 50.00th=[ 92], 60.00th=[ 96], 00:28:08.330 | 70.00th=[ 108], 80.00th=[ 125], 90.00th=[ 134], 95.00th=[ 144], 00:28:08.330 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 215], 99.95th=[ 215], 00:28:08.330 | 99.99th=[ 215] 00:28:08.330 bw ( KiB/s): min= 384, max= 840, per=3.92%, avg=654.79, stdev=131.78, samples=19 00:28:08.330 iops : min= 96, max= 210, avg=163.68, stdev=32.93, samples=19 00:28:08.330 lat (msec) : 50=6.75%, 100=58.28%, 250=34.97% 00:28:08.330 cpu : usr=35.33%, sys=2.32%, ctx=922, majf=0, minf=1074 00:28:08.330 IO depths : 1=0.1%, 2=2.8%, 4=11.3%, 8=71.4%, 16=14.4%, 32=0.0%, >=64=0.0% 00:28:08.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.330 complete : 0=0.0%, 4=90.2%, 8=7.3%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.330 issued rwts: total=1690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.330 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:08.330 filename2: (groupid=0, jobs=1): err= 0: pid=91561: Fri Dec 13 09:30:00 2024 00:28:08.330 read: IOPS=151, BW=608KiB/s (622kB/s)(6084KiB/10011msec) 00:28:08.330 slat (usec): min=5, max=11825, avg=32.30, stdev=380.09 
00:28:08.330 clat (msec): min=15, max=203, avg=105.02, stdev=32.15 00:28:08.330 lat (msec): min=15, max=204, avg=105.06, stdev=32.15 00:28:08.330 clat percentiles (msec): 00:28:08.330 | 1.00th=[ 21], 5.00th=[ 44], 10.00th=[ 74], 20.00th=[ 85], 00:28:08.330 | 30.00th=[ 90], 40.00th=[ 94], 50.00th=[ 99], 60.00th=[ 118], 00:28:08.330 | 70.00th=[ 125], 80.00th=[ 132], 90.00th=[ 144], 95.00th=[ 144], 00:28:08.330 | 99.00th=[ 192], 99.50th=[ 192], 99.90th=[ 205], 99.95th=[ 205], 00:28:08.330 | 99.99th=[ 205] 00:28:08.330 bw ( KiB/s): min= 384, max= 768, per=3.47%, avg=579.21, stdev=116.22, samples=19 00:28:08.330 iops : min= 96, max= 192, avg=144.79, stdev=29.05, samples=19 00:28:08.330 lat (msec) : 20=0.79%, 50=5.92%, 100=47.01%, 250=46.29% 00:28:08.330 cpu : usr=36.56%, sys=2.25%, ctx=1066, majf=0, minf=1074 00:28:08.330 IO depths : 1=0.1%, 2=5.9%, 4=23.3%, 8=58.0%, 16=12.8%, 32=0.0%, >=64=0.0% 00:28:08.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.330 complete : 0=0.0%, 4=93.9%, 8=0.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.330 issued rwts: total=1521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.330 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:08.330 filename2: (groupid=0, jobs=1): err= 0: pid=91562: Fri Dec 13 09:30:00 2024 00:28:08.330 read: IOPS=173, BW=693KiB/s (710kB/s)(6948KiB/10024msec) 00:28:08.330 slat (usec): min=5, max=8038, avg=41.97, stdev=417.25 00:28:08.330 clat (msec): min=24, max=215, avg=92.11, stdev=30.03 00:28:08.330 lat (msec): min=24, max=215, avg=92.15, stdev=30.02 00:28:08.330 clat percentiles (msec): 00:28:08.330 | 1.00th=[ 26], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 68], 00:28:08.330 | 30.00th=[ 81], 40.00th=[ 85], 50.00th=[ 89], 60.00th=[ 96], 00:28:08.330 | 70.00th=[ 100], 80.00th=[ 118], 90.00th=[ 136], 95.00th=[ 144], 00:28:08.330 | 99.00th=[ 184], 99.50th=[ 192], 99.90th=[ 215], 99.95th=[ 215], 00:28:08.330 | 99.99th=[ 215] 00:28:08.330 bw ( KiB/s): min= 496, max= 896, per=4.13%, avg=690.55, stdev=125.57, samples=20 00:28:08.330 iops : min= 124, max= 224, avg=172.60, stdev=31.37, samples=20 00:28:08.330 lat (msec) : 50=6.22%, 100=65.63%, 250=28.15% 00:28:08.330 cpu : usr=35.35%, sys=2.26%, ctx=1132, majf=0, minf=1074 00:28:08.330 IO depths : 1=0.1%, 2=2.1%, 4=8.2%, 8=74.7%, 16=14.8%, 32=0.0%, >=64=0.0% 00:28:08.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.330 complete : 0=0.0%, 4=89.3%, 8=8.9%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.330 issued rwts: total=1737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.330 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:08.330 filename2: (groupid=0, jobs=1): err= 0: pid=91563: Fri Dec 13 09:30:00 2024 00:28:08.330 read: IOPS=179, BW=720KiB/s (737kB/s)(7224KiB/10037msec) 00:28:08.330 slat (usec): min=6, max=8034, avg=22.26, stdev=188.73 00:28:08.330 clat (msec): min=24, max=166, avg=88.75, stdev=28.41 00:28:08.330 lat (msec): min=24, max=166, avg=88.77, stdev=28.41 00:28:08.330 clat percentiles (msec): 00:28:08.330 | 1.00th=[ 27], 5.00th=[ 47], 10.00th=[ 52], 20.00th=[ 63], 00:28:08.330 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 88], 60.00th=[ 94], 00:28:08.330 | 70.00th=[ 99], 80.00th=[ 108], 90.00th=[ 132], 95.00th=[ 142], 00:28:08.330 | 99.00th=[ 148], 99.50th=[ 150], 99.90th=[ 159], 99.95th=[ 167], 00:28:08.330 | 99.99th=[ 167] 00:28:08.330 bw ( KiB/s): min= 536, max= 1136, per=4.30%, avg=717.60, stdev=138.03, samples=20 00:28:08.330 iops : min= 134, max= 284, avg=179.40, stdev=34.51, samples=20 
00:28:08.330 lat (msec) : 50=9.69%, 100=62.57%, 250=27.74% 00:28:08.330 cpu : usr=37.98%, sys=2.19%, ctx=1348, majf=0, minf=1074 00:28:08.330 IO depths : 1=0.1%, 2=1.1%, 4=4.4%, 8=79.0%, 16=15.4%, 32=0.0%, >=64=0.0% 00:28:08.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.330 complete : 0=0.0%, 4=88.2%, 8=10.8%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.330 issued rwts: total=1806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.330 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:08.330 filename2: (groupid=0, jobs=1): err= 0: pid=91564: Fri Dec 13 09:30:00 2024 00:28:08.330 read: IOPS=187, BW=750KiB/s (768kB/s)(7508KiB/10006msec) 00:28:08.330 slat (usec): min=4, max=8052, avg=31.33, stdev=324.35 00:28:08.330 clat (msec): min=6, max=167, avg=85.14, stdev=31.47 00:28:08.330 lat (msec): min=6, max=167, avg=85.17, stdev=31.46 00:28:08.330 clat percentiles (msec): 00:28:08.330 | 1.00th=[ 9], 5.00th=[ 34], 10.00th=[ 48], 20.00th=[ 61], 00:28:08.330 | 30.00th=[ 69], 40.00th=[ 81], 50.00th=[ 85], 60.00th=[ 92], 00:28:08.330 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 132], 95.00th=[ 144], 00:28:08.330 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 167], 99.95th=[ 167], 00:28:08.330 | 99.99th=[ 167] 00:28:08.330 bw ( KiB/s): min= 502, max= 872, per=4.27%, avg=712.11, stdev=110.10, samples=19 00:28:08.330 iops : min= 125, max= 218, avg=178.00, stdev=27.58, samples=19 00:28:08.330 lat (msec) : 10=1.07%, 20=1.97%, 50=9.11%, 100=64.30%, 250=23.55% 00:28:08.330 cpu : usr=36.17%, sys=2.44%, ctx=1070, majf=0, minf=1074 00:28:08.330 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.3%, 16=15.6%, 32=0.0%, >=64=0.0% 00:28:08.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.330 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.330 issued rwts: total=1877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.330 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:08.330 filename2: (groupid=0, jobs=1): err= 0: pid=91565: Fri Dec 13 09:30:00 2024 00:28:08.330 read: IOPS=185, BW=742KiB/s (760kB/s)(7464KiB/10057msec) 00:28:08.330 slat (usec): min=5, max=5038, avg=26.54, stdev=198.57 00:28:08.330 clat (msec): min=2, max=167, avg=85.96, stdev=33.32 00:28:08.330 lat (msec): min=2, max=167, avg=85.99, stdev=33.33 00:28:08.330 clat percentiles (msec): 00:28:08.330 | 1.00th=[ 8], 5.00th=[ 16], 10.00th=[ 46], 20.00th=[ 62], 00:28:08.330 | 30.00th=[ 69], 40.00th=[ 83], 50.00th=[ 88], 60.00th=[ 92], 00:28:08.330 | 70.00th=[ 99], 80.00th=[ 114], 90.00th=[ 136], 95.00th=[ 142], 00:28:08.330 | 99.00th=[ 146], 99.50th=[ 155], 99.90th=[ 161], 99.95th=[ 167], 00:28:08.330 | 99.99th=[ 167] 00:28:08.330 bw ( KiB/s): min= 512, max= 1536, per=4.43%, avg=740.00, stdev=219.34, samples=20 00:28:08.330 iops : min= 128, max= 384, avg=185.00, stdev=54.84, samples=20 00:28:08.330 lat (msec) : 4=0.86%, 10=0.86%, 20=3.32%, 50=7.29%, 100=59.54% 00:28:08.330 lat (msec) : 250=28.14% 00:28:08.330 cpu : usr=40.82%, sys=2.83%, ctx=1669, majf=0, minf=1073 00:28:08.330 IO depths : 1=0.4%, 2=1.4%, 4=4.2%, 8=78.8%, 16=15.2%, 32=0.0%, >=64=0.0% 00:28:08.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.330 complete : 0=0.0%, 4=88.2%, 8=10.9%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.330 issued rwts: total=1866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.330 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:08.330 00:28:08.330 Run status group 0 (all jobs): 00:28:08.330 READ: 
bw=16.3MiB/s (17.1MB/s), 600KiB/s-764KiB/s (614kB/s-783kB/s), io=164MiB (172MB), run=10001-10057msec 00:28:08.330 ----------------------------------------------------- 00:28:08.330 Suppressions used: 00:28:08.330 count bytes template 00:28:08.330 45 402 /usr/src/fio/parse.c 00:28:08.330 1 8 libtcmalloc_minimal.so 00:28:08.330 1 904 libcrypto.so 00:28:08.330 ----------------------------------------------------- 00:28:08.330 00:28:08.330 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:08.330 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:08.330 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:08.330 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:08.330 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:08.330 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:08.330 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.330 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.330 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.330 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:08.330 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.330 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.330 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.330 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:08.330 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.331 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.590 bdev_null0 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.590 [2024-12-13 09:30:02.237925] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:08.590 
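Each create_subsystem call traced here reduces to four RPCs against the running target: create a DIF-enabled null bdev, create an NVMe-oF subsystem, attach the bdev as a namespace, and add a TCP listener. A minimal standalone sketch of that sequence using SPDK's scripts/rpc.py, with the values visible in the trace for sub=0; the rpc.py path and the use of the default RPC socket are assumptions, the RPC names and arguments are copied from the log:

# Sketch only: equivalent of one create_subsystem invocation (sub=0),
# assuming the nvmf target is already running.
sub=0
./scripts/rpc.py bdev_null_create "bdev_null${sub}" 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub}" \
    --serial-number "53313233-${sub}" --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub}" "bdev_null${sub}"
./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub}" \
    -t tcp -a 10.0.0.3 -s 4420

The trace below repeats the same steps for subsystem 1.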
09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.590 bdev_null1 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:08.590 { 00:28:08.590 "params": { 00:28:08.590 "name": "Nvme$subsystem", 00:28:08.590 "trtype": "$TEST_TRANSPORT", 00:28:08.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.590 "adrfam": "ipv4", 00:28:08.590 "trsvcid": "$NVMF_PORT", 00:28:08.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.590 "hdgst": ${hdgst:-false}, 00:28:08.590 "ddgst": ${ddgst:-false} 00:28:08.590 }, 00:28:08.590 "method": "bdev_nvme_attach_controller" 00:28:08.590 } 00:28:08.590 EOF 00:28:08.590 )") 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:08.590 09:30:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:08.590 { 00:28:08.591 "params": { 00:28:08.591 "name": "Nvme$subsystem", 00:28:08.591 "trtype": "$TEST_TRANSPORT", 00:28:08.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.591 "adrfam": "ipv4", 00:28:08.591 "trsvcid": "$NVMF_PORT", 00:28:08.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.591 "hdgst": ${hdgst:-false}, 00:28:08.591 "ddgst": ${ddgst:-false} 00:28:08.591 }, 00:28:08.591 "method": "bdev_nvme_attach_controller" 00:28:08.591 } 00:28:08.591 EOF 00:28:08.591 )") 00:28:08.591 09:30:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:28:08.591 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:08.591 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:08.591 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:08.591 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:08.591 09:30:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
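The per-subsystem heredoc blocks appended to config[] above are joined on commas and normalized by the IFS/jq/printf steps traced next. A minimal sketch of that pattern outside the harness, with this run's values substituted; it is not the gen_nvmf_target_json helper itself, which embeds the resulting list in the full bdev_nvme configuration handed to fio via --spdk_json_conf:

# Sketch: one JSON object per subsystem, joined with commas, pretty-printed by jq.
config=()
for sub in 0 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme${sub}",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode${sub}",
    "hostnqn": "nqn.2016-06.io.spdk:host${sub}",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Wrapping in [...] here just makes the joined list valid JSON on its own.
(IFS=,; printf '[%s]\n' "${config[*]}") | jq .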
00:28:08.591 09:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:08.591 09:30:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:28:08.591 09:30:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:08.591 "params": { 00:28:08.591 "name": "Nvme0", 00:28:08.591 "trtype": "tcp", 00:28:08.591 "traddr": "10.0.0.3", 00:28:08.591 "adrfam": "ipv4", 00:28:08.591 "trsvcid": "4420", 00:28:08.591 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:08.591 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:08.591 "hdgst": false, 00:28:08.591 "ddgst": false 00:28:08.591 }, 00:28:08.591 "method": "bdev_nvme_attach_controller" 00:28:08.591 },{ 00:28:08.591 "params": { 00:28:08.591 "name": "Nvme1", 00:28:08.591 "trtype": "tcp", 00:28:08.591 "traddr": "10.0.0.3", 00:28:08.591 "adrfam": "ipv4", 00:28:08.591 "trsvcid": "4420", 00:28:08.591 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:08.591 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:08.591 "hdgst": false, 00:28:08.591 "ddgst": false 00:28:08.591 }, 00:28:08.591 "method": "bdev_nvme_attach_controller" 00:28:08.591 }' 00:28:08.591 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:08.591 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:08.591 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:28:08.591 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:08.591 09:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:08.850 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:08.850 ... 00:28:08.850 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:08.850 ... 
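The filename0/filename1 banner and the "Starting 4 threads" line that follow reflect the parameters set for this pass earlier in the trace: two null-DIF bdevs, bs=8k,16k,128k (8 KiB reads, 16 KiB writes, 128 KiB trims), numjobs=2, iodepth=8, 5-second runtime, with the job file delivered to fio on /dev/fd/61 and the bdev JSON on /dev/fd/62. A rough sketch of the kind of job file gen_fio_conf produces for this pass; the section names and traced options come from the log, while the filename values and the thread/time_based settings are assumptions rather than a copy of the helper's real output:

# Sketch of an equivalent job file (not the helper's literal output).
cat > dif_rand_params.fio <<'EOF'
[global]
# spdk_bdev is the external engine provided by the LD_PRELOADed SPDK fio plugin
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
# assumed bdev name: controller Nvme0, namespace 1
filename=Nvme0n1

[filename1]
# assumed bdev name: controller Nvme1, namespace 1
filename=Nvme1n1
EOF

Two jobs over each of the two files is what fio reports below as "Starting 4 threads".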
00:28:08.850 fio-3.35 00:28:08.850 Starting 4 threads 00:28:15.421 00:28:15.421 filename0: (groupid=0, jobs=1): err= 0: pid=91700: Fri Dec 13 09:30:08 2024 00:28:15.421 read: IOPS=1683, BW=13.2MiB/s (13.8MB/s)(65.8MiB/5003msec) 00:28:15.421 slat (nsec): min=5484, max=61714, avg=16910.56, stdev=5733.00 00:28:15.421 clat (usec): min=1440, max=7053, avg=4695.33, stdev=984.68 00:28:15.421 lat (usec): min=1465, max=7075, avg=4712.25, stdev=984.59 00:28:15.421 clat percentiles (usec): 00:28:15.421 | 1.00th=[ 2311], 5.00th=[ 2474], 10.00th=[ 3097], 20.00th=[ 3589], 00:28:15.421 | 30.00th=[ 4424], 40.00th=[ 5014], 50.00th=[ 5145], 60.00th=[ 5211], 00:28:15.421 | 70.00th=[ 5276], 80.00th=[ 5342], 90.00th=[ 5604], 95.00th=[ 5800], 00:28:15.421 | 99.00th=[ 6128], 99.50th=[ 6325], 99.90th=[ 6652], 99.95th=[ 6718], 00:28:15.421 | 99.99th=[ 7046] 00:28:15.421 bw ( KiB/s): min=12016, max=15408, per=23.95%, avg=13623.11, stdev=1498.61, samples=9 00:28:15.421 iops : min= 1502, max= 1926, avg=1702.89, stdev=187.33, samples=9 00:28:15.421 lat (msec) : 2=0.05%, 4=22.70%, 10=77.25% 00:28:15.421 cpu : usr=92.54%, sys=6.56%, ctx=72, majf=0, minf=1074 00:28:15.421 IO depths : 1=0.1%, 2=14.5%, 4=55.8%, 8=29.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:15.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.421 complete : 0=0.0%, 4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.421 issued rwts: total=8423,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.421 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:15.421 filename0: (groupid=0, jobs=1): err= 0: pid=91701: Fri Dec 13 09:30:08 2024 00:28:15.421 read: IOPS=1691, BW=13.2MiB/s (13.9MB/s)(66.1MiB/5002msec) 00:28:15.421 slat (nsec): min=5441, max=70558, avg=17471.81, stdev=5839.41 00:28:15.421 clat (usec): min=1434, max=9534, avg=4672.05, stdev=995.72 00:28:15.421 lat (usec): min=1448, max=9556, avg=4689.52, stdev=994.78 00:28:15.421 clat percentiles (usec): 00:28:15.421 | 1.00th=[ 2311], 5.00th=[ 2442], 10.00th=[ 3064], 20.00th=[ 3556], 00:28:15.421 | 30.00th=[ 4424], 40.00th=[ 5014], 50.00th=[ 5145], 60.00th=[ 5211], 00:28:15.421 | 70.00th=[ 5211], 80.00th=[ 5342], 90.00th=[ 5604], 95.00th=[ 5800], 00:28:15.421 | 99.00th=[ 6063], 99.50th=[ 6194], 99.90th=[ 6718], 99.95th=[ 7504], 00:28:15.421 | 99.99th=[ 9503] 00:28:15.421 bw ( KiB/s): min=12016, max=15408, per=24.04%, avg=13679.67, stdev=1536.64, samples=9 00:28:15.421 iops : min= 1502, max= 1926, avg=1709.89, stdev=192.02, samples=9 00:28:15.421 lat (msec) : 2=0.14%, 4=23.22%, 10=76.64% 00:28:15.421 cpu : usr=92.42%, sys=6.68%, ctx=9, majf=0, minf=1074 00:28:15.421 IO depths : 1=0.1%, 2=14.2%, 4=56.0%, 8=29.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:15.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.421 complete : 0=0.0%, 4=94.5%, 8=5.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.421 issued rwts: total=8460,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.421 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:15.421 filename1: (groupid=0, jobs=1): err= 0: pid=91702: Fri Dec 13 09:30:08 2024 00:28:15.421 read: IOPS=1768, BW=13.8MiB/s (14.5MB/s)(69.1MiB/5003msec) 00:28:15.421 slat (nsec): min=5742, max=61214, avg=15030.55, stdev=6166.54 00:28:15.421 clat (usec): min=1047, max=13838, avg=4475.92, stdev=1261.24 00:28:15.421 lat (usec): min=1057, max=13871, avg=4490.95, stdev=1260.85 00:28:15.421 clat percentiles (usec): 00:28:15.422 | 1.00th=[ 2409], 5.00th=[ 2507], 10.00th=[ 2540], 20.00th=[ 2868], 00:28:15.422 | 
30.00th=[ 3458], 40.00th=[ 4490], 50.00th=[ 4883], 60.00th=[ 5080], 00:28:15.422 | 70.00th=[ 5407], 80.00th=[ 5604], 90.00th=[ 5800], 95.00th=[ 5932], 00:28:15.422 | 99.00th=[ 6390], 99.50th=[ 6783], 99.90th=[ 8979], 99.95th=[12649], 00:28:15.422 | 99.99th=[13829] 00:28:15.422 bw ( KiB/s): min=10816, max=16896, per=24.40%, avg=13884.44, stdev=2684.05, samples=9 00:28:15.422 iops : min= 1352, max= 2112, avg=1735.56, stdev=335.51, samples=9 00:28:15.422 lat (msec) : 2=0.24%, 4=31.11%, 10=68.56%, 20=0.09% 00:28:15.422 cpu : usr=92.12%, sys=6.86%, ctx=49, majf=0, minf=1074 00:28:15.422 IO depths : 1=0.1%, 2=10.4%, 4=58.0%, 8=31.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:15.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.422 complete : 0=0.0%, 4=96.0%, 8=4.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.422 issued rwts: total=8849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.422 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:15.422 filename1: (groupid=0, jobs=1): err= 0: pid=91703: Fri Dec 13 09:30:08 2024 00:28:15.422 read: IOPS=1968, BW=15.4MiB/s (16.1MB/s)(76.9MiB/5003msec) 00:28:15.422 slat (usec): min=5, max=156, avg=16.18, stdev= 6.47 00:28:15.422 clat (usec): min=818, max=10987, avg=4022.56, stdev=1200.15 00:28:15.422 lat (usec): min=827, max=11003, avg=4038.74, stdev=1199.04 00:28:15.422 clat percentiles (usec): 00:28:15.422 | 1.00th=[ 2245], 5.00th=[ 2409], 10.00th=[ 2474], 20.00th=[ 2704], 00:28:15.422 | 30.00th=[ 2999], 40.00th=[ 3425], 50.00th=[ 4293], 60.00th=[ 4621], 00:28:15.422 | 70.00th=[ 4948], 80.00th=[ 5145], 90.00th=[ 5538], 95.00th=[ 5669], 00:28:15.422 | 99.00th=[ 6063], 99.50th=[ 6325], 99.90th=[ 8225], 99.95th=[ 8455], 00:28:15.422 | 99.99th=[10945] 00:28:15.422 bw ( KiB/s): min=14320, max=16848, per=27.51%, avg=15649.78, stdev=1013.33, samples=9 00:28:15.422 iops : min= 1790, max= 2106, avg=1956.22, stdev=126.67, samples=9 00:28:15.422 lat (usec) : 1000=0.26% 00:28:15.422 lat (msec) : 2=0.49%, 4=47.04%, 10=52.16%, 20=0.04% 00:28:15.422 cpu : usr=91.74%, sys=6.88%, ctx=54, majf=0, minf=1073 00:28:15.422 IO depths : 1=0.1%, 2=2.7%, 4=62.2%, 8=35.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:15.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.422 complete : 0=0.0%, 4=99.0%, 8=1.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.422 issued rwts: total=9846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.422 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:15.422 00:28:15.422 Run status group 0 (all jobs): 00:28:15.422 READ: bw=55.6MiB/s (58.3MB/s), 13.2MiB/s-15.4MiB/s (13.8MB/s-16.1MB/s), io=278MiB (291MB), run=5002-5003msec 00:28:15.682 ----------------------------------------------------- 00:28:15.682 Suppressions used: 00:28:15.682 count bytes template 00:28:15.682 6 52 /usr/src/fio/parse.c 00:28:15.682 1 8 libtcmalloc_minimal.so 00:28:15.682 1 904 libcrypto.so 00:28:15.682 ----------------------------------------------------- 00:28:15.682 00:28:15.682 09:30:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:15.682 09:30:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:15.682 09:30:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:15.682 09:30:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:15.682 09:30:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:15.682 09:30:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:15.682 09:30:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.682 09:30:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:15.682 09:30:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.682 09:30:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:15.682 09:30:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.682 09:30:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:15.682 09:30:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.682 09:30:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:15.682 09:30:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:15.682 09:30:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:15.682 09:30:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:15.682 09:30:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.682 09:30:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:15.682 09:30:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.682 09:30:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:15.682 09:30:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.682 09:30:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:15.682 09:30:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.682 00:28:15.682 real 0m27.343s 00:28:15.682 user 2m6.963s 00:28:15.682 sys 0m9.290s 00:28:15.682 09:30:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:15.682 ************************************ 00:28:15.682 END TEST fio_dif_rand_params 00:28:15.682 ************************************ 00:28:15.682 09:30:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:15.682 09:30:09 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:15.682 09:30:09 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:15.682 09:30:09 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:15.682 09:30:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:15.682 ************************************ 00:28:15.682 START TEST fio_dif_digest 00:28:15.682 ************************************ 00:28:15.682 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:28:15.682 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:15.682 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:15.682 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:15.682 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:15.682 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:15.682 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:15.682 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:15.682 09:30:09 
nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:15.682 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:15.682 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:15.682 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:15.682 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:15.682 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:15.682 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:15.682 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:15.682 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:15.682 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.682 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:15.682 bdev_null0 00:28:15.682 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.682 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:15.682 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.682 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:15.941 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.941 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:15.941 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.941 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:15.941 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.941 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:15.941 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.941 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:15.941 [2024-12-13 09:30:09.584331] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:15.941 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.941 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:15.941 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:15.941 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:15.941 09:30:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:28:15.941 09:30:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:28:15.941 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:15.941 09:30:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:15.941 09:30:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:15.941 { 00:28:15.941 "params": { 00:28:15.941 "name": "Nvme$subsystem", 00:28:15.941 "trtype": "$TEST_TRANSPORT", 00:28:15.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.941 
"adrfam": "ipv4", 00:28:15.941 "trsvcid": "$NVMF_PORT", 00:28:15.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.941 "hdgst": ${hdgst:-false}, 00:28:15.941 "ddgst": ${ddgst:-false} 00:28:15.941 }, 00:28:15.941 "method": "bdev_nvme_attach_controller" 00:28:15.941 } 00:28:15.941 EOF 00:28:15.941 )") 00:28:15.941 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:15.942 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:15.942 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:15.942 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:15.942 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:15.942 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:15.942 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:15.942 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:15.942 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:28:15.942 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:15.942 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:15.942 09:30:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:28:15.942 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:15.942 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:15.942 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:28:15.942 09:30:09 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:15.942 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:15.942 09:30:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:28:15.942 09:30:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:28:15.942 09:30:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:15.942 "params": { 00:28:15.942 "name": "Nvme0", 00:28:15.942 "trtype": "tcp", 00:28:15.942 "traddr": "10.0.0.3", 00:28:15.942 "adrfam": "ipv4", 00:28:15.942 "trsvcid": "4420", 00:28:15.942 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:15.942 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:15.942 "hdgst": true, 00:28:15.942 "ddgst": true 00:28:15.942 }, 00:28:15.942 "method": "bdev_nvme_attach_controller" 00:28:15.942 }' 00:28:15.942 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:15.942 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:15.942 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # break 00:28:15.942 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:15.942 09:30:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:16.201 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:16.201 ... 00:28:16.201 fio-3.35 00:28:16.201 Starting 3 threads 00:28:28.455 00:28:28.455 filename0: (groupid=0, jobs=1): err= 0: pid=91810: Fri Dec 13 09:30:20 2024 00:28:28.455 read: IOPS=199, BW=24.9MiB/s (26.1MB/s)(249MiB/10006msec) 00:28:28.455 slat (nsec): min=5637, max=75862, avg=18292.06, stdev=6046.73 00:28:28.455 clat (usec): min=14359, max=17506, avg=15027.60, stdev=597.99 00:28:28.455 lat (usec): min=14373, max=17531, avg=15045.89, stdev=598.74 00:28:28.455 clat percentiles (usec): 00:28:28.455 | 1.00th=[14484], 5.00th=[14484], 10.00th=[14484], 20.00th=[14615], 00:28:28.455 | 30.00th=[14615], 40.00th=[14746], 50.00th=[14746], 60.00th=[14877], 00:28:28.455 | 70.00th=[15139], 80.00th=[15401], 90.00th=[15795], 95.00th=[16319], 00:28:28.455 | 99.00th=[17171], 99.50th=[17171], 99.90th=[17433], 99.95th=[17433], 00:28:28.455 | 99.99th=[17433] 00:28:28.455 bw ( KiB/s): min=24526, max=26112, per=33.37%, avg=25503.05, stdev=489.74, samples=19 00:28:28.455 iops : min= 191, max= 204, avg=199.21, stdev= 3.90, samples=19 00:28:28.455 lat (msec) : 20=100.00% 00:28:28.455 cpu : usr=92.34%, sys=7.14%, ctx=15, majf=0, minf=1073 00:28:28.455 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:28.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:28.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:28.455 issued rwts: total=1992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:28.455 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:28.455 filename0: (groupid=0, jobs=1): err= 0: pid=91811: Fri Dec 13 09:30:20 2024 00:28:28.455 read: IOPS=199, BW=24.9MiB/s (26.1MB/s)(249MiB/10010msec) 00:28:28.455 slat (nsec): min=8250, max=75622, avg=17865.11, stdev=6075.41 00:28:28.455 clat (usec): min=14370, max=20476, avg=15034.95, stdev=632.88 00:28:28.455 lat (usec): min=14386, max=20512, avg=15052.81, stdev=633.68 00:28:28.455 clat percentiles (usec): 00:28:28.455 | 1.00th=[14484], 5.00th=[14484], 10.00th=[14484], 20.00th=[14615], 00:28:28.455 | 30.00th=[14615], 40.00th=[14746], 50.00th=[14746], 60.00th=[14877], 00:28:28.455 | 
70.00th=[15139], 80.00th=[15401], 90.00th=[15795], 95.00th=[16450], 00:28:28.455 | 99.00th=[17171], 99.50th=[17433], 99.90th=[20579], 99.95th=[20579], 00:28:28.455 | 99.99th=[20579] 00:28:28.455 bw ( KiB/s): min=24576, max=26112, per=33.38%, avg=25505.68, stdev=547.80, samples=19 00:28:28.455 iops : min= 192, max= 204, avg=199.26, stdev= 4.28, samples=19 00:28:28.455 lat (msec) : 20=99.85%, 50=0.15% 00:28:28.455 cpu : usr=92.81%, sys=6.65%, ctx=20, majf=0, minf=1075 00:28:28.455 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:28.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:28.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:28.455 issued rwts: total=1992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:28.455 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:28.455 filename0: (groupid=0, jobs=1): err= 0: pid=91812: Fri Dec 13 09:30:20 2024 00:28:28.455 read: IOPS=199, BW=24.9MiB/s (26.1MB/s)(249MiB/10010msec) 00:28:28.455 slat (usec): min=5, max=215, avg=18.23, stdev= 7.39 00:28:28.455 clat (usec): min=14362, max=20001, avg=15032.20, stdev=623.65 00:28:28.455 lat (usec): min=14389, max=20024, avg=15050.44, stdev=624.47 00:28:28.455 clat percentiles (usec): 00:28:28.455 | 1.00th=[14484], 5.00th=[14484], 10.00th=[14484], 20.00th=[14615], 00:28:28.455 | 30.00th=[14615], 40.00th=[14746], 50.00th=[14746], 60.00th=[14877], 00:28:28.455 | 70.00th=[15270], 80.00th=[15401], 90.00th=[15795], 95.00th=[16319], 00:28:28.455 | 99.00th=[17171], 99.50th=[17433], 99.90th=[20055], 99.95th=[20055], 00:28:28.455 | 99.99th=[20055] 00:28:28.456 bw ( KiB/s): min=24576, max=26112, per=33.38%, avg=25505.68, stdev=547.80, samples=19 00:28:28.456 iops : min= 192, max= 204, avg=199.26, stdev= 4.28, samples=19 00:28:28.456 lat (msec) : 20=99.95%, 50=0.05% 00:28:28.456 cpu : usr=92.63%, sys=6.81%, ctx=19, majf=0, minf=1074 00:28:28.456 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:28.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:28.456 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:28.456 issued rwts: total=1992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:28.456 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:28.456 00:28:28.456 Run status group 0 (all jobs): 00:28:28.456 READ: bw=74.6MiB/s (78.2MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=747MiB (783MB), run=10006-10010msec 00:28:28.456 ----------------------------------------------------- 00:28:28.456 Suppressions used: 00:28:28.456 count bytes template 00:28:28.456 5 44 /usr/src/fio/parse.c 00:28:28.456 1 8 libtcmalloc_minimal.so 00:28:28.456 1 904 libcrypto.so 00:28:28.456 ----------------------------------------------------- 00:28:28.456 00:28:28.456 09:30:21 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:28.456 09:30:21 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:28.456 09:30:21 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:28.456 09:30:21 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:28.456 09:30:21 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:28.456 09:30:21 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:28.456 09:30:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.456 09:30:21 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@10 -- # set +x 00:28:28.456 09:30:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.456 09:30:21 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:28.456 09:30:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.456 09:30:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:28.456 09:30:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.456 00:28:28.456 real 0m12.267s 00:28:28.456 user 0m29.660s 00:28:28.456 sys 0m2.424s 00:28:28.456 09:30:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:28.456 ************************************ 00:28:28.456 END TEST fio_dif_digest 00:28:28.456 ************************************ 00:28:28.456 09:30:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:28.456 09:30:21 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:28.456 09:30:21 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:28.456 09:30:21 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:28.456 09:30:21 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:28:28.456 09:30:21 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:28.456 09:30:21 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:28:28.456 09:30:21 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:28.456 09:30:21 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:28.456 rmmod nvme_tcp 00:28:28.456 rmmod nvme_fabrics 00:28:28.456 rmmod nvme_keyring 00:28:28.456 09:30:21 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:28.456 09:30:21 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:28:28.456 09:30:21 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:28:28.456 09:30:21 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 91055 ']' 00:28:28.456 09:30:21 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 91055 00:28:28.456 09:30:21 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 91055 ']' 00:28:28.456 09:30:21 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 91055 00:28:28.456 09:30:21 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:28:28.456 09:30:21 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:28.456 09:30:21 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91055 00:28:28.456 09:30:21 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:28.456 killing process with pid 91055 00:28:28.456 09:30:21 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:28.456 09:30:21 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91055' 00:28:28.456 09:30:21 nvmf_dif -- common/autotest_common.sh@973 -- # kill 91055 00:28:28.456 09:30:21 nvmf_dif -- common/autotest_common.sh@978 -- # wait 91055 00:28:29.024 09:30:22 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:28:29.024 09:30:22 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:29.283 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:29.283 Waiting for block devices as requested 00:28:29.541 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:29.541 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:29.541 09:30:23 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:29.541 09:30:23 nvmf_dif -- nvmf/common.sh@524 
-- # nvmf_tcp_fini 00:28:29.541 09:30:23 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:28:29.541 09:30:23 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:28:29.541 09:30:23 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:29.541 09:30:23 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:28:29.541 09:30:23 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:29.541 09:30:23 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:29.541 09:30:23 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:29.541 09:30:23 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:29.541 09:30:23 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:29.800 09:30:23 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:29.800 09:30:23 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:29.800 09:30:23 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:29.800 09:30:23 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:29.800 09:30:23 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:29.800 09:30:23 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:29.800 09:30:23 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:29.800 09:30:23 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:29.800 09:30:23 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:29.800 09:30:23 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:29.800 09:30:23 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:29.800 09:30:23 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.800 09:30:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:29.800 09:30:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.800 09:30:23 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:28:29.800 00:28:29.800 real 1m8.415s 00:28:29.800 user 4m4.594s 00:28:29.800 sys 0m19.812s 00:28:29.800 09:30:23 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:29.800 09:30:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:29.800 ************************************ 00:28:29.800 END TEST nvmf_dif 00:28:29.800 ************************************ 00:28:29.800 09:30:23 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:29.800 09:30:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:29.800 09:30:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:29.800 09:30:23 -- common/autotest_common.sh@10 -- # set +x 00:28:29.800 ************************************ 00:28:29.800 START TEST nvmf_abort_qd_sizes 00:28:29.800 ************************************ 00:28:29.800 09:30:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:30.060 * Looking for test storage... 
00:28:30.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:30.060 09:30:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:30.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.061 --rc genhtml_branch_coverage=1 00:28:30.061 --rc genhtml_function_coverage=1 00:28:30.061 --rc genhtml_legend=1 00:28:30.061 --rc geninfo_all_blocks=1 00:28:30.061 --rc geninfo_unexecuted_blocks=1 00:28:30.061 00:28:30.061 ' 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:30.061 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.061 --rc genhtml_branch_coverage=1 00:28:30.061 --rc genhtml_function_coverage=1 00:28:30.061 --rc genhtml_legend=1 00:28:30.061 --rc geninfo_all_blocks=1 00:28:30.061 --rc geninfo_unexecuted_blocks=1 00:28:30.061 00:28:30.061 ' 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:30.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.061 --rc genhtml_branch_coverage=1 00:28:30.061 --rc genhtml_function_coverage=1 00:28:30.061 --rc genhtml_legend=1 00:28:30.061 --rc geninfo_all_blocks=1 00:28:30.061 --rc geninfo_unexecuted_blocks=1 00:28:30.061 00:28:30.061 ' 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:30.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.061 --rc genhtml_branch_coverage=1 00:28:30.061 --rc genhtml_function_coverage=1 00:28:30.061 --rc genhtml_legend=1 00:28:30.061 --rc geninfo_all_blocks=1 00:28:30.061 --rc geninfo_unexecuted_blocks=1 00:28:30.061 00:28:30.061 ' 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:30.061 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:30.061 Cannot find device "nvmf_init_br" 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:30.061 Cannot find device "nvmf_init_br2" 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:30.061 Cannot find device "nvmf_tgt_br" 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:28:30.061 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:30.321 Cannot find device "nvmf_tgt_br2" 00:28:30.321 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:28:30.321 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:30.321 Cannot find device "nvmf_init_br" 00:28:30.321 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:28:30.321 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set 
nvmf_init_br2 down 00:28:30.321 Cannot find device "nvmf_init_br2" 00:28:30.321 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:28:30.321 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:30.321 Cannot find device "nvmf_tgt_br" 00:28:30.321 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:28:30.321 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:30.321 Cannot find device "nvmf_tgt_br2" 00:28:30.321 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:28:30.321 09:30:23 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:30.321 Cannot find device "nvmf_br" 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:30.321 Cannot find device "nvmf_init_if" 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:30.321 Cannot find device "nvmf_init_if2" 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:30.321 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:30.321 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 
00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:30.321 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:30.580 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:30.580 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:30.580 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:30.580 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:30.580 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:30.580 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:30.580 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:30.580 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:30.580 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:30.580 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:30.580 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:30.580 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:28:30.580 00:28:30.580 --- 10.0.0.3 ping statistics --- 00:28:30.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.580 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:28:30.580 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:30.580 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:30.580 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:28:30.580 00:28:30.580 --- 10.0.0.4 ping statistics --- 00:28:30.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.580 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:28:30.580 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:30.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:30.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:28:30.580 00:28:30.580 --- 10.0.0.1 ping statistics --- 00:28:30.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.580 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:28:30.580 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:30.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:30.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:28:30.580 00:28:30.580 --- 10.0.0.2 ping statistics --- 00:28:30.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.580 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:28:30.580 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:30.580 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:28:30.580 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:28:30.580 09:30:24 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:31.147 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:31.408 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:31.408 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:31.408 09:30:25 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:31.408 09:30:25 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:31.408 09:30:25 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:31.408 09:30:25 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:31.408 09:30:25 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:31.408 09:30:25 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:31.408 09:30:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:31.408 09:30:25 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:31.408 09:30:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:31.408 09:30:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:31.408 09:30:25 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=92469 00:28:31.408 09:30:25 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 92469 00:28:31.408 09:30:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 92469 ']' 00:28:31.408 09:30:25 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:31.408 09:30:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:31.408 09:30:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:31.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:31.408 09:30:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:31.408 09:30:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:31.408 09:30:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:31.667 [2024-12-13 09:30:25.311602] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
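At this point nvmf_veth_init has finished building the virtual topology the rest of the run depends on, the four connectivity pings above have passed, and nvmfappstart is launching nvmf_tgt (pid 92469) inside the nvmf_tgt_ns_spdk namespace. Condensed from the command trace above into one place, the topology amounts to roughly the following (names follow nvmf/common.sh; the link bring-up steps and the SPDK_NVMF comment tags that the ipts wrapper appends to each iptables rule are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator side
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if     # target side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

This is why every connect in these tests targets 10.0.0.3:4420: the initiator runs in the root namespace, the target listens inside the namespace, and the bridge carries the traffic between them.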
00:28:31.667 [2024-12-13 09:30:25.311781] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:31.667 [2024-12-13 09:30:25.505766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:31.926 [2024-12-13 09:30:25.636939] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:31.926 [2024-12-13 09:30:25.637009] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:31.926 [2024-12-13 09:30:25.637034] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:31.926 [2024-12-13 09:30:25.637050] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:31.926 [2024-12-13 09:30:25.637066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:31.926 [2024-12-13 09:30:25.639259] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.926 [2024-12-13 09:30:25.639319] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:31.926 [2024-12-13 09:30:25.639435] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.926 [2024-12-13 09:30:25.639449] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:32.184 [2024-12-13 09:30:25.841551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:32.443 09:30:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:32.443 09:30:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:28:32.443 09:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:32.443 09:30:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:32.443 09:30:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:32.443 09:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:32.443 09:30:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:32.443 09:30:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:32.443 09:30:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:32.443 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:28:32.443 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:28:32.443 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:28:32.443 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:28:32.444 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:28:32.444 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:28:32.444 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:28:32.444 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:28:32.444 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:28:32.444 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:28:32.703 09:30:26 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
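The nvme_in_userspace trace above enumerates NVMe controllers by PCI class code 01 (mass storage) / subclass 08 (NVM) / prog-if 02 (NVMe), keeps only the functions still visible under /sys/bus/pci/drivers/nvme, and ends up with 0000:00:10.0 and 0000:00:11.0; the first BDF then becomes the spdk_target_abort test device. Assembled into a single pipeline, the discovery step traced above is:

  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'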
00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:32.703 09:30:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:32.703 ************************************ 00:28:32.703 START TEST spdk_target_abort 00:28:32.703 ************************************ 00:28:32.703 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:28:32.703 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:32.703 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:28:32.703 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.703 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:32.703 spdk_targetn1 00:28:32.703 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.703 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:32.703 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.703 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:32.703 [2024-12-13 09:30:26.447489] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:32.704 [2024-12-13 09:30:26.491756] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:32.704 09:30:26 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:32.704 09:30:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:35.990 Initializing NVMe Controllers 00:28:35.990 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:28:35.990 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:35.990 Initialization complete. Launching workers. 
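For reference, the spdk_target setup traced above boils down to five RPCs against the running nvmf target. They are shown here as direct rpc.py calls rather than the test's rpc_cmd wrapper; the rpc.py path is the one used elsewhere in this run, and the flag meanings in the comments are the usual rpc.py ones.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target    # local PCIe NVMe -> bdev "spdk_targetn1"
  $rpc nvmf_create_transport -t tcp -o -u 8192                               # TCP transport for the target
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1       # expose the bdev as NSID 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420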
00:28:35.990 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8214, failed: 0 00:28:35.990 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1074, failed to submit 7140 00:28:35.990 success 823, unsuccessful 251, failed 0 00:28:35.990 09:30:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:35.990 09:30:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:40.179 Initializing NVMe Controllers 00:28:40.179 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:28:40.179 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:40.179 Initialization complete. Launching workers. 00:28:40.179 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8823, failed: 0 00:28:40.179 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1146, failed to submit 7677 00:28:40.179 success 385, unsuccessful 761, failed 0 00:28:40.179 09:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:40.179 09:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:43.466 Initializing NVMe Controllers 00:28:43.466 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:28:43.466 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:43.466 Initialization complete. Launching workers. 
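The three abort runs in this test differ only in the abort queue depth; condensed from the xtrace, the rabort loop driving them is essentially the following, with the example binary path as used in this run.

  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in 4 24 64; do
    # -w rw -M 50: mixed 50/50 read/write 4 KiB I/O; aborts are issued against in-flight commands
    /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done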
00:28:43.466 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27701, failed: 0 00:28:43.466 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2233, failed to submit 25468 00:28:43.466 success 341, unsuccessful 1892, failed 0 00:28:43.466 09:30:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:43.466 09:30:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.466 09:30:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:43.466 09:30:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.466 09:30:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:43.466 09:30:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.466 09:30:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:43.466 09:30:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.466 09:30:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 92469 00:28:43.466 09:30:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 92469 ']' 00:28:43.466 09:30:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 92469 00:28:43.466 09:30:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:28:43.466 09:30:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:43.466 09:30:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92469 00:28:43.466 killing process with pid 92469 00:28:43.466 09:30:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:43.466 09:30:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:43.466 09:30:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92469' 00:28:43.466 09:30:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 92469 00:28:43.466 09:30:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 92469 00:28:44.034 ************************************ 00:28:44.034 END TEST spdk_target_abort 00:28:44.034 ************************************ 00:28:44.034 00:28:44.034 real 0m11.377s 00:28:44.034 user 0m45.348s 00:28:44.034 sys 0m2.222s 00:28:44.034 09:30:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:44.034 09:30:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.034 09:30:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:44.034 09:30:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:44.034 09:30:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:44.034 09:30:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:44.034 ************************************ 00:28:44.034 START TEST kernel_target_abort 00:28:44.034 
************************************ 00:28:44.034 09:30:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:28:44.034 09:30:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:44.034 09:30:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:28:44.034 09:30:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.034 09:30:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.034 09:30:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.034 09:30:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.034 09:30:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.034 09:30:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.034 09:30:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.034 09:30:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.034 09:30:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.034 09:30:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:44.034 09:30:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:44.034 09:30:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:44.034 09:30:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:44.034 09:30:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:44.034 09:30:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:44.034 09:30:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:28:44.034 09:30:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:44.034 09:30:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:44.034 09:30:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:44.034 09:30:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:44.293 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:44.552 Waiting for block devices as requested 00:28:44.552 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:44.552 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:44.811 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:44.811 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:44.811 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:44.811 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:44.811 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:44.811 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:44.811 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:44.811 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:44.811 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:28:44.811 No valid GPT data, bailing 00:28:44.811 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:28:45.070 No valid GPT data, bailing 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
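The block-device probing above (and continuing below) looks for an NVMe namespace that is neither zoned nor holding a partition table, so it can safely back the kernel nvmet namespace; this run ends up with /dev/nvme1n1. A condensed sketch of that selection, using blkid in place of the spdk-gpt.py helper:

  nvme=
  for block in /sys/block/nvme*; do
    dev=/dev/${block##*/}
    # skip zoned namespaces
    [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
    # "No valid GPT data, bailing" in the trace means no partition table, i.e. the device is free to use
    [[ -z $(blkid -s PTTYPE -o value "$dev") ]] && nvme=$dev
  done
  echo "kernel target will use: $nvme"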
00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:28:45.070 No valid GPT data, bailing 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:28:45.070 No valid GPT data, bailing 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:28:45.070 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:45.330 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a --hostid=5267ba90-6d03-4c73-b69a-15b62f92a67a -a 10.0.0.1 -t tcp -s 4420 00:28:45.330 00:28:45.330 Discovery Log Number of Records 2, Generation counter 2 00:28:45.330 =====Discovery Log Entry 0====== 00:28:45.330 trtype: tcp 00:28:45.330 adrfam: ipv4 00:28:45.330 subtype: current discovery subsystem 00:28:45.330 treq: not specified, sq flow control disable supported 00:28:45.330 portid: 1 00:28:45.330 trsvcid: 4420 00:28:45.330 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:45.330 traddr: 10.0.0.1 00:28:45.330 eflags: none 00:28:45.330 sectype: none 00:28:45.330 =====Discovery Log Entry 1====== 00:28:45.330 trtype: tcp 00:28:45.330 adrfam: ipv4 00:28:45.330 subtype: nvme subsystem 00:28:45.330 treq: not specified, sq flow control disable supported 00:28:45.330 portid: 1 00:28:45.330 trsvcid: 4420 00:28:45.330 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:45.330 traddr: 10.0.0.1 00:28:45.330 eflags: none 00:28:45.330 sectype: none 00:28:45.330 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:45.330 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:45.330 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:45.330 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:45.330 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:45.330 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:45.330 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:45.330 09:30:38 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:45.330 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:45.330 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:45.330 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:45.330 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:45.330 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:45.330 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:45.330 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:45.330 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:45.330 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:45.330 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:45.330 09:30:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:45.330 09:30:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:45.330 09:30:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:48.618 Initializing NVMe Controllers 00:28:48.618 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:48.618 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:48.618 Initialization complete. Launching workers. 00:28:48.618 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 25436, failed: 0 00:28:48.618 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25436, failed to submit 0 00:28:48.618 success 0, unsuccessful 25436, failed 0 00:28:48.618 09:30:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:48.618 09:30:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:51.906 Initializing NVMe Controllers 00:28:51.906 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:51.906 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:51.906 Initialization complete. Launching workers. 
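The configure_kernel_target sequence traced earlier builds the in-kernel nvmet target through configfs. Roughly, it amounts to the steps below; the trace shows the echo commands but not their redirect targets, so the attribute file names are the standard nvmet ones and should be read as assumed, not quoted from the run.

  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet                               # the TCP transport module (nvmet-tcp) must also be available
  mkdir "$sub" "$sub/namespaces/1" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_model"          # model string; attribute name assumed
  echo 1            > "$sub/attr_allow_any_host"
  echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
  echo 1            > "$sub/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"             # start serving the subsystem on the port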
00:28:51.906 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55635, failed: 0 00:28:51.906 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22955, failed to submit 32680 00:28:51.906 success 0, unsuccessful 22955, failed 0 00:28:51.906 09:30:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:51.906 09:30:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:55.262 Initializing NVMe Controllers 00:28:55.262 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:55.262 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:55.262 Initialization complete. Launching workers. 00:28:55.262 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 60217, failed: 0 00:28:55.262 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15014, failed to submit 45203 00:28:55.262 success 0, unsuccessful 15014, failed 0 00:28:55.262 09:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:55.262 09:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:55.262 09:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:28:55.262 09:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:55.262 09:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:55.262 09:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:55.262 09:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:55.262 09:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:55.262 09:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:55.262 09:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:55.830 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:56.398 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:56.398 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:56.398 00:28:56.398 real 0m12.477s 00:28:56.398 user 0m6.427s 00:28:56.398 sys 0m3.734s 00:28:56.398 09:30:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:56.398 ************************************ 00:28:56.398 END TEST kernel_target_abort 00:28:56.398 ************************************ 00:28:56.398 09:30:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:56.657 09:30:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:56.657 09:30:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:56.657 
09:30:50 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:56.657 09:30:50 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:28:56.657 09:30:50 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:56.657 09:30:50 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:28:56.657 09:30:50 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:56.657 09:30:50 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:56.657 rmmod nvme_tcp 00:28:56.657 rmmod nvme_fabrics 00:28:56.657 rmmod nvme_keyring 00:28:56.657 09:30:50 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:56.657 09:30:50 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:28:56.657 09:30:50 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:28:56.657 09:30:50 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 92469 ']' 00:28:56.657 09:30:50 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 92469 00:28:56.657 09:30:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 92469 ']' 00:28:56.657 09:30:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 92469 00:28:56.657 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (92469) - No such process 00:28:56.657 Process with pid 92469 is not found 00:28:56.657 09:30:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 92469 is not found' 00:28:56.657 09:30:50 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:28:56.657 09:30:50 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:56.916 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:56.916 Waiting for block devices as requested 00:28:57.174 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:57.174 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:57.174 09:30:51 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:57.174 09:30:51 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:57.174 09:30:51 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:28:57.174 09:30:51 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:28:57.174 09:30:51 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:57.174 09:30:51 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:28:57.174 09:30:51 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:57.174 09:30:51 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:57.174 09:30:51 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:57.174 09:30:51 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:57.174 09:30:51 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:57.433 09:30:51 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:57.433 09:30:51 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:57.433 09:30:51 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:57.433 09:30:51 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:57.433 09:30:51 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:57.433 09:30:51 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:57.433 09:30:51 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:57.433 09:30:51 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:57.433 09:30:51 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:57.433 09:30:51 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:57.433 09:30:51 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:57.433 09:30:51 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.433 09:30:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:57.433 09:30:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:57.433 09:30:51 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:28:57.433 00:28:57.433 real 0m27.570s 00:28:57.433 user 0m53.159s 00:28:57.433 sys 0m7.405s 00:28:57.433 ************************************ 00:28:57.433 END TEST nvmf_abort_qd_sizes 00:28:57.433 ************************************ 00:28:57.433 09:30:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:57.433 09:30:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:57.433 09:30:51 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:57.433 09:30:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:57.433 09:30:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:57.433 09:30:51 -- common/autotest_common.sh@10 -- # set +x 00:28:57.433 ************************************ 00:28:57.433 START TEST keyring_file 00:28:57.433 ************************************ 00:28:57.433 09:30:51 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:57.693 * Looking for test storage... 
00:28:57.693 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:28:57.693 09:30:51 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:57.693 09:30:51 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:28:57.693 09:30:51 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:57.693 09:30:51 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@345 -- # : 1 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@353 -- # local d=1 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@355 -- # echo 1 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@353 -- # local d=2 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@355 -- # echo 2 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@368 -- # return 0 00:28:57.693 09:30:51 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:57.693 09:30:51 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:57.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:57.693 --rc genhtml_branch_coverage=1 00:28:57.693 --rc genhtml_function_coverage=1 00:28:57.693 --rc genhtml_legend=1 00:28:57.693 --rc geninfo_all_blocks=1 00:28:57.693 --rc geninfo_unexecuted_blocks=1 00:28:57.693 00:28:57.693 ' 00:28:57.693 09:30:51 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:57.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:57.693 --rc genhtml_branch_coverage=1 00:28:57.693 --rc genhtml_function_coverage=1 00:28:57.693 --rc genhtml_legend=1 00:28:57.693 --rc geninfo_all_blocks=1 00:28:57.693 --rc 
geninfo_unexecuted_blocks=1 00:28:57.693 00:28:57.693 ' 00:28:57.693 09:30:51 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:57.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:57.693 --rc genhtml_branch_coverage=1 00:28:57.693 --rc genhtml_function_coverage=1 00:28:57.693 --rc genhtml_legend=1 00:28:57.693 --rc geninfo_all_blocks=1 00:28:57.693 --rc geninfo_unexecuted_blocks=1 00:28:57.693 00:28:57.693 ' 00:28:57.693 09:30:51 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:57.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:57.693 --rc genhtml_branch_coverage=1 00:28:57.693 --rc genhtml_function_coverage=1 00:28:57.693 --rc genhtml_legend=1 00:28:57.693 --rc geninfo_all_blocks=1 00:28:57.693 --rc geninfo_unexecuted_blocks=1 00:28:57.693 00:28:57.693 ' 00:28:57.693 09:30:51 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:28:57.693 09:30:51 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:57.693 09:30:51 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:57.693 09:30:51 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.693 09:30:51 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.693 09:30:51 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.693 09:30:51 keyring_file -- paths/export.sh@5 -- # export PATH 00:28:57.693 09:30:51 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@51 -- # : 0 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:57.693 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:57.693 09:30:51 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:57.693 09:30:51 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:57.693 09:30:51 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:57.693 09:30:51 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:28:57.693 09:30:51 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:28:57.693 09:30:51 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:28:57.693 09:30:51 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:57.693 09:30:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:57.693 09:30:51 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:57.693 09:30:51 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:57.693 09:30:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:57.693 09:30:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:57.693 09:30:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.S0R8EiAqKZ 00:28:57.693 09:30:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:28:57.693 09:30:51 keyring_file -- nvmf/common.sh@733 -- # python - 00:28:57.693 09:30:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.S0R8EiAqKZ 00:28:57.693 09:30:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.S0R8EiAqKZ 00:28:57.693 09:30:51 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.S0R8EiAqKZ 00:28:57.693 09:30:51 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:28:57.693 09:30:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:57.694 09:30:51 keyring_file -- keyring/common.sh@17 -- # name=key1 00:28:57.694 09:30:51 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:57.694 09:30:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:57.694 09:30:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:57.953 09:30:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Czhsqs6LKZ 00:28:57.953 09:30:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:57.953 09:30:51 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:57.953 09:30:51 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:28:57.953 09:30:51 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:57.953 09:30:51 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:28:57.953 09:30:51 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:28:57.953 09:30:51 keyring_file -- nvmf/common.sh@733 -- # python - 00:28:57.953 09:30:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Czhsqs6LKZ 00:28:57.953 09:30:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Czhsqs6LKZ 00:28:57.953 09:30:51 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Czhsqs6LKZ 00:28:57.953 09:30:51 keyring_file -- keyring/file.sh@30 -- # tgtpid=93490 00:28:57.953 09:30:51 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:57.953 09:30:51 keyring_file -- keyring/file.sh@32 -- # waitforlisten 93490 00:28:57.953 09:30:51 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 93490 ']' 00:28:57.953 09:30:51 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:57.953 09:30:51 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:57.953 09:30:51 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:28:57.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:57.953 09:30:51 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:57.953 09:30:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:57.953 [2024-12-13 09:30:51.772088] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:28:57.953 [2024-12-13 09:30:51.772267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93490 ] 00:28:58.212 [2024-12-13 09:30:51.958425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.212 [2024-12-13 09:30:52.083962] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.470 [2024-12-13 09:30:52.304964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:59.038 09:30:52 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:59.038 09:30:52 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:28:59.038 09:30:52 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:28:59.038 09:30:52 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.038 09:30:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:59.038 [2024-12-13 09:30:52.743878] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.038 null0 00:28:59.038 [2024-12-13 09:30:52.775862] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:59.038 [2024-12-13 09:30:52.776111] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:59.038 09:30:52 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.038 09:30:52 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:59.038 09:30:52 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:59.038 09:30:52 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:59.038 09:30:52 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:59.038 09:30:52 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:59.038 09:30:52 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:59.039 09:30:52 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:59.039 09:30:52 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:59.039 09:30:52 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.039 09:30:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:59.039 [2024-12-13 09:30:52.803852] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:28:59.039 request: 00:28:59.039 { 00:28:59.039 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:28:59.039 "secure_channel": false, 00:28:59.039 "listen_address": { 00:28:59.039 "trtype": "tcp", 00:28:59.039 "traddr": "127.0.0.1", 00:28:59.039 "trsvcid": "4420" 00:28:59.039 }, 00:28:59.039 "method": "nvmf_subsystem_add_listener", 00:28:59.039 "req_id": 1 00:28:59.039 } 
00:28:59.039 Got JSON-RPC error response 00:28:59.039 response: 00:28:59.039 { 00:28:59.039 "code": -32602, 00:28:59.039 "message": "Invalid parameters" 00:28:59.039 } 00:28:59.039 09:30:52 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:59.039 09:30:52 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:59.039 09:30:52 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:59.039 09:30:52 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:59.039 09:30:52 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:59.039 09:30:52 keyring_file -- keyring/file.sh@47 -- # bperfpid=93503 00:28:59.039 09:30:52 keyring_file -- keyring/file.sh@49 -- # waitforlisten 93503 /var/tmp/bperf.sock 00:28:59.039 09:30:52 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:28:59.039 09:30:52 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 93503 ']' 00:28:59.039 09:30:52 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:59.039 09:30:52 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.039 09:30:52 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:59.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:59.039 09:30:52 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.039 09:30:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:59.298 [2024-12-13 09:30:52.927792] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
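Once the bdevperf instance is listening on /var/tmp/bperf.sock, the keyring_file checks that follow drive it over that socket: the two PSK files created earlier are registered as keys and then read back. Condensed from the calls below:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.S0R8EiAqKZ
  $rpc -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Czhsqs6LKZ
  # verify both keys are registered and still point at the 0600 PSK files created earlier
  $rpc -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0") | .path'
  $rpc -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'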
00:28:59.298 [2024-12-13 09:30:52.928192] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93503 ] 00:28:59.298 [2024-12-13 09:30:53.101143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.557 [2024-12-13 09:30:53.187554] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.557 [2024-12-13 09:30:53.339952] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:00.123 09:30:53 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:00.123 09:30:53 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:29:00.123 09:30:53 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.S0R8EiAqKZ 00:29:00.123 09:30:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.S0R8EiAqKZ 00:29:00.382 09:30:54 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Czhsqs6LKZ 00:29:00.382 09:30:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Czhsqs6LKZ 00:29:00.640 09:30:54 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:29:00.640 09:30:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:00.640 09:30:54 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:29:00.640 09:30:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:00.640 09:30:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:00.898 09:30:54 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.S0R8EiAqKZ == \/\t\m\p\/\t\m\p\.\S\0\R\8\E\i\A\q\K\Z ]] 00:29:00.898 09:30:54 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:29:00.898 09:30:54 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:29:00.898 09:30:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:00.898 09:30:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:00.898 09:30:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:01.156 09:30:54 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.Czhsqs6LKZ == \/\t\m\p\/\t\m\p\.\C\z\h\s\q\s\6\L\K\Z ]] 00:29:01.156 09:30:54 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:29:01.156 09:30:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:01.156 09:30:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:01.156 09:30:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:01.156 09:30:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:01.156 09:30:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:01.414 09:30:55 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:01.414 09:30:55 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:29:01.414 09:30:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:01.414 09:30:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:01.414 09:30:55 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:01.414 09:30:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:01.414 09:30:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:01.673 09:30:55 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:29:01.673 09:30:55 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:01.673 09:30:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:01.932 [2024-12-13 09:30:55.694804] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:01.932 nvme0n1 00:29:01.932 09:30:55 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:29:01.932 09:30:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:01.932 09:30:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:01.932 09:30:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:01.932 09:30:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:01.932 09:30:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:02.500 09:30:56 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:29:02.500 09:30:56 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:29:02.500 09:30:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:02.500 09:30:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:02.500 09:30:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:02.500 09:30:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:02.500 09:30:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:02.500 09:30:56 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:29:02.500 09:30:56 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:02.759 Running I/O for 1 seconds... 
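(The 1-second randrw result below is produced by bdevperf running in wait-for-RPC mode. Condensed to its essentials, the sequence traced above is roughly the following; the paths, flags, and bperf socket name are taken from this trace itself, not quoted verbatim from keyring/file.sh:)

build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z &
scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.S0R8EiAqKZ
scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Czhsqs6LKZ
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

(The -z flag keeps bdevperf idle until perform_tests is issued over the RPC socket, so the timed run only starts once the TLS-backed nvme0n1 bdev has been attached with key0.)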
00:29:03.695 9554.00 IOPS, 37.32 MiB/s 00:29:03.695 Latency(us) 00:29:03.695 [2024-12-13T09:30:57.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.695 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:03.695 nvme0n1 : 1.05 9221.75 36.02 0.00 0.00 13630.05 5064.15 55526.87 00:29:03.695 [2024-12-13T09:30:57.585Z] =================================================================================================================== 00:29:03.695 [2024-12-13T09:30:57.585Z] Total : 9221.75 36.02 0.00 0.00 13630.05 5064.15 55526.87 00:29:03.695 { 00:29:03.695 "results": [ 00:29:03.695 { 00:29:03.695 "job": "nvme0n1", 00:29:03.695 "core_mask": "0x2", 00:29:03.695 "workload": "randrw", 00:29:03.695 "percentage": 50, 00:29:03.695 "status": "finished", 00:29:03.695 "queue_depth": 128, 00:29:03.695 "io_size": 4096, 00:29:03.695 "runtime": 1.050018, 00:29:03.695 "iops": 9221.746674818907, 00:29:03.695 "mibps": 36.022447948511356, 00:29:03.695 "io_failed": 0, 00:29:03.695 "io_timeout": 0, 00:29:03.695 "avg_latency_us": 13630.045768685512, 00:29:03.695 "min_latency_us": 5064.145454545454, 00:29:03.695 "max_latency_us": 55526.865454545456 00:29:03.695 } 00:29:03.695 ], 00:29:03.695 "core_count": 1 00:29:03.695 } 00:29:03.695 09:30:57 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:03.695 09:30:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:03.954 09:30:57 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:29:03.954 09:30:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:03.954 09:30:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:03.954 09:30:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:03.954 09:30:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:03.954 09:30:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:04.213 09:30:58 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:04.213 09:30:58 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:29:04.213 09:30:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:04.213 09:30:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:04.213 09:30:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:04.213 09:30:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:04.213 09:30:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:04.472 09:30:58 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:29:04.472 09:30:58 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:04.472 09:30:58 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:29:04.472 09:30:58 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:04.472 09:30:58 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:29:04.472 09:30:58 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.472 09:30:58 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:29:04.472 09:30:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.472 09:30:58 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:04.472 09:30:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:04.731 [2024-12-13 09:30:58.603819] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:04.731 [2024-12-13 09:30:58.603825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (107): Transport endpoint is not connected 00:29:04.731 [2024-12-13 09:30:58.604800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (9): Bad file descriptor 00:29:04.731 [2024-12-13 09:30:58.605792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:29:04.731 [2024-12-13 09:30:58.605827] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:04.731 [2024-12-13 09:30:58.605858] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:29:04.732 [2024-12-13 09:30:58.605871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
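(The attach above deliberately uses key1, which presumably does not match the PSK the target listener was configured with, so the TLS connection is dropped and the controller lands in a failed state; this is the outcome the test wants. The assertion is made with the autotest NOT helper, which inverts the command's exit status. Schematically, using the same arguments as the trace:)

NOT scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1

(The expected outcome is the -5 / "Input/output error" JSON-RPC response dumped below, after which the refcounts of key0 and key1 are re-checked to confirm the failed attach did not leave a stray key reference.)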
00:29:04.732 request: 00:29:04.732 { 00:29:04.732 "name": "nvme0", 00:29:04.732 "trtype": "tcp", 00:29:04.732 "traddr": "127.0.0.1", 00:29:04.732 "adrfam": "ipv4", 00:29:04.732 "trsvcid": "4420", 00:29:04.732 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:04.732 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:04.732 "prchk_reftag": false, 00:29:04.732 "prchk_guard": false, 00:29:04.732 "hdgst": false, 00:29:04.732 "ddgst": false, 00:29:04.732 "psk": "key1", 00:29:04.732 "allow_unrecognized_csi": false, 00:29:04.732 "method": "bdev_nvme_attach_controller", 00:29:04.732 "req_id": 1 00:29:04.732 } 00:29:04.732 Got JSON-RPC error response 00:29:04.732 response: 00:29:04.732 { 00:29:04.732 "code": -5, 00:29:04.732 "message": "Input/output error" 00:29:04.732 } 00:29:04.732 09:30:58 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:29:04.732 09:30:58 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:04.732 09:30:58 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:04.732 09:30:58 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:04.990 09:30:58 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:29:04.990 09:30:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:04.990 09:30:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:04.990 09:30:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:04.990 09:30:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:04.990 09:30:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:04.990 09:30:58 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:04.990 09:30:58 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:29:04.990 09:30:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:04.990 09:30:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:04.991 09:30:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:04.991 09:30:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:04.991 09:30:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:05.249 09:30:59 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:29:05.249 09:30:59 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:29:05.249 09:30:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:05.508 09:30:59 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:29:05.508 09:30:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:05.767 09:30:59 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:29:05.767 09:30:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:05.767 09:30:59 keyring_file -- keyring/file.sh@78 -- # jq length 00:29:06.026 09:30:59 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:29:06.026 09:30:59 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.S0R8EiAqKZ 00:29:06.026 09:30:59 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.S0R8EiAqKZ 00:29:06.026 09:30:59 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:29:06.026 09:30:59 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.S0R8EiAqKZ 00:29:06.026 09:30:59 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:29:06.026 09:30:59 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:06.026 09:30:59 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:29:06.026 09:30:59 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:06.026 09:30:59 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.S0R8EiAqKZ 00:29:06.026 09:30:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.S0R8EiAqKZ 00:29:06.285 [2024-12-13 09:31:00.150600] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.S0R8EiAqKZ': 0100660 00:29:06.285 [2024-12-13 09:31:00.150951] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:06.285 request: 00:29:06.285 { 00:29:06.285 "name": "key0", 00:29:06.285 "path": "/tmp/tmp.S0R8EiAqKZ", 00:29:06.285 "method": "keyring_file_add_key", 00:29:06.285 "req_id": 1 00:29:06.285 } 00:29:06.285 Got JSON-RPC error response 00:29:06.285 response: 00:29:06.285 { 00:29:06.285 "code": -1, 00:29:06.285 "message": "Operation not permitted" 00:29:06.285 } 00:29:06.285 09:31:00 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:29:06.285 09:31:00 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:06.285 09:31:00 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:06.285 09:31:00 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:06.285 09:31:00 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.S0R8EiAqKZ 00:29:06.544 09:31:00 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.S0R8EiAqKZ 00:29:06.544 09:31:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.S0R8EiAqKZ 00:29:06.544 09:31:00 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.S0R8EiAqKZ 00:29:06.544 09:31:00 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:29:06.544 09:31:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:06.544 09:31:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:06.544 09:31:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:06.544 09:31:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:06.544 09:31:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:06.803 09:31:00 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:29:06.803 09:31:00 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:06.803 09:31:00 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:29:06.803 09:31:00 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:06.803 09:31:00 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:29:06.803 09:31:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:06.803 09:31:00 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:29:06.803 09:31:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:06.803 09:31:00 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:06.803 09:31:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:07.062 [2024-12-13 09:31:00.918891] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.S0R8EiAqKZ': No such file or directory 00:29:07.062 [2024-12-13 09:31:00.918974] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:07.062 [2024-12-13 09:31:00.919000] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:07.062 [2024-12-13 09:31:00.919014] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:29:07.062 [2024-12-13 09:31:00.919026] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:07.062 [2024-12-13 09:31:00.919039] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:07.062 request: 00:29:07.062 { 00:29:07.062 "name": "nvme0", 00:29:07.062 "trtype": "tcp", 00:29:07.062 "traddr": "127.0.0.1", 00:29:07.062 "adrfam": "ipv4", 00:29:07.062 "trsvcid": "4420", 00:29:07.062 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:07.062 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:07.062 "prchk_reftag": false, 00:29:07.062 "prchk_guard": false, 00:29:07.062 "hdgst": false, 00:29:07.062 "ddgst": false, 00:29:07.062 "psk": "key0", 00:29:07.062 "allow_unrecognized_csi": false, 00:29:07.062 "method": "bdev_nvme_attach_controller", 00:29:07.062 "req_id": 1 00:29:07.062 } 00:29:07.062 Got JSON-RPC error response 00:29:07.062 response: 00:29:07.062 { 00:29:07.062 "code": -19, 00:29:07.062 "message": "No such device" 00:29:07.062 } 00:29:07.062 09:31:00 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:29:07.062 09:31:00 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:07.062 09:31:00 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:07.062 09:31:00 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:07.062 09:31:00 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:29:07.062 09:31:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:07.628 09:31:01 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:07.628 09:31:01 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:07.628 09:31:01 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:07.628 09:31:01 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:07.628 
09:31:01 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:07.628 09:31:01 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:07.628 09:31:01 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5BQ3k5Gdes 00:29:07.628 09:31:01 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:07.628 09:31:01 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:07.628 09:31:01 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:29:07.628 09:31:01 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:29:07.628 09:31:01 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:29:07.628 09:31:01 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:29:07.628 09:31:01 keyring_file -- nvmf/common.sh@733 -- # python - 00:29:07.628 09:31:01 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5BQ3k5Gdes 00:29:07.628 09:31:01 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5BQ3k5Gdes 00:29:07.628 09:31:01 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.5BQ3k5Gdes 00:29:07.628 09:31:01 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5BQ3k5Gdes 00:29:07.628 09:31:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5BQ3k5Gdes 00:29:07.628 09:31:01 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:07.628 09:31:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:08.196 nvme0n1 00:29:08.196 09:31:01 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:29:08.196 09:31:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:08.196 09:31:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:08.196 09:31:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:08.196 09:31:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:08.196 09:31:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:08.455 09:31:02 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:29:08.455 09:31:02 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:29:08.455 09:31:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:08.714 09:31:02 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:29:08.714 09:31:02 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:29:08.714 09:31:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:08.714 09:31:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:08.714 09:31:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:08.973 09:31:02 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:29:08.973 09:31:02 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:29:08.973 09:31:02 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:29:08.973 09:31:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:08.973 09:31:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:08.973 09:31:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:08.973 09:31:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:09.232 09:31:02 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:29:09.232 09:31:02 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:09.232 09:31:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:09.491 09:31:03 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:29:09.491 09:31:03 keyring_file -- keyring/file.sh@105 -- # jq length 00:29:09.491 09:31:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:09.491 09:31:03 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:29:09.491 09:31:03 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5BQ3k5Gdes 00:29:09.491 09:31:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5BQ3k5Gdes 00:29:09.750 09:31:03 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Czhsqs6LKZ 00:29:09.750 09:31:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Czhsqs6LKZ 00:29:10.009 09:31:03 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:10.009 09:31:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:10.268 nvme0n1 00:29:10.268 09:31:04 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:29:10.268 09:31:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:10.527 09:31:04 keyring_file -- keyring/file.sh@113 -- # config='{ 00:29:10.527 "subsystems": [ 00:29:10.527 { 00:29:10.527 "subsystem": "keyring", 00:29:10.527 "config": [ 00:29:10.527 { 00:29:10.527 "method": "keyring_file_add_key", 00:29:10.527 "params": { 00:29:10.527 "name": "key0", 00:29:10.527 "path": "/tmp/tmp.5BQ3k5Gdes" 00:29:10.527 } 00:29:10.527 }, 00:29:10.527 { 00:29:10.527 "method": "keyring_file_add_key", 00:29:10.527 "params": { 00:29:10.527 "name": "key1", 00:29:10.527 "path": "/tmp/tmp.Czhsqs6LKZ" 00:29:10.527 } 00:29:10.527 } 00:29:10.527 ] 00:29:10.527 }, 00:29:10.527 { 00:29:10.527 "subsystem": "iobuf", 00:29:10.527 "config": [ 00:29:10.527 { 00:29:10.527 "method": "iobuf_set_options", 00:29:10.527 "params": { 00:29:10.527 "small_pool_count": 8192, 00:29:10.527 "large_pool_count": 1024, 00:29:10.527 "small_bufsize": 8192, 00:29:10.527 "large_bufsize": 135168, 00:29:10.527 "enable_numa": false 00:29:10.527 } 00:29:10.527 } 00:29:10.527 ] 00:29:10.527 }, 00:29:10.527 { 00:29:10.527 "subsystem": 
"sock", 00:29:10.527 "config": [ 00:29:10.527 { 00:29:10.527 "method": "sock_set_default_impl", 00:29:10.527 "params": { 00:29:10.527 "impl_name": "uring" 00:29:10.527 } 00:29:10.527 }, 00:29:10.527 { 00:29:10.527 "method": "sock_impl_set_options", 00:29:10.527 "params": { 00:29:10.527 "impl_name": "ssl", 00:29:10.527 "recv_buf_size": 4096, 00:29:10.527 "send_buf_size": 4096, 00:29:10.527 "enable_recv_pipe": true, 00:29:10.527 "enable_quickack": false, 00:29:10.527 "enable_placement_id": 0, 00:29:10.527 "enable_zerocopy_send_server": true, 00:29:10.527 "enable_zerocopy_send_client": false, 00:29:10.527 "zerocopy_threshold": 0, 00:29:10.527 "tls_version": 0, 00:29:10.527 "enable_ktls": false 00:29:10.527 } 00:29:10.527 }, 00:29:10.527 { 00:29:10.527 "method": "sock_impl_set_options", 00:29:10.527 "params": { 00:29:10.527 "impl_name": "posix", 00:29:10.528 "recv_buf_size": 2097152, 00:29:10.528 "send_buf_size": 2097152, 00:29:10.528 "enable_recv_pipe": true, 00:29:10.528 "enable_quickack": false, 00:29:10.528 "enable_placement_id": 0, 00:29:10.528 "enable_zerocopy_send_server": true, 00:29:10.528 "enable_zerocopy_send_client": false, 00:29:10.528 "zerocopy_threshold": 0, 00:29:10.528 "tls_version": 0, 00:29:10.528 "enable_ktls": false 00:29:10.528 } 00:29:10.528 }, 00:29:10.528 { 00:29:10.528 "method": "sock_impl_set_options", 00:29:10.528 "params": { 00:29:10.528 "impl_name": "uring", 00:29:10.528 "recv_buf_size": 2097152, 00:29:10.528 "send_buf_size": 2097152, 00:29:10.528 "enable_recv_pipe": true, 00:29:10.528 "enable_quickack": false, 00:29:10.528 "enable_placement_id": 0, 00:29:10.528 "enable_zerocopy_send_server": false, 00:29:10.528 "enable_zerocopy_send_client": false, 00:29:10.528 "zerocopy_threshold": 0, 00:29:10.528 "tls_version": 0, 00:29:10.528 "enable_ktls": false 00:29:10.528 } 00:29:10.528 } 00:29:10.528 ] 00:29:10.528 }, 00:29:10.528 { 00:29:10.528 "subsystem": "vmd", 00:29:10.528 "config": [] 00:29:10.528 }, 00:29:10.528 { 00:29:10.528 "subsystem": "accel", 00:29:10.528 "config": [ 00:29:10.528 { 00:29:10.528 "method": "accel_set_options", 00:29:10.528 "params": { 00:29:10.528 "small_cache_size": 128, 00:29:10.528 "large_cache_size": 16, 00:29:10.528 "task_count": 2048, 00:29:10.528 "sequence_count": 2048, 00:29:10.528 "buf_count": 2048 00:29:10.528 } 00:29:10.528 } 00:29:10.528 ] 00:29:10.528 }, 00:29:10.528 { 00:29:10.528 "subsystem": "bdev", 00:29:10.528 "config": [ 00:29:10.528 { 00:29:10.528 "method": "bdev_set_options", 00:29:10.528 "params": { 00:29:10.528 "bdev_io_pool_size": 65535, 00:29:10.528 "bdev_io_cache_size": 256, 00:29:10.528 "bdev_auto_examine": true, 00:29:10.528 "iobuf_small_cache_size": 128, 00:29:10.528 "iobuf_large_cache_size": 16 00:29:10.528 } 00:29:10.528 }, 00:29:10.528 { 00:29:10.528 "method": "bdev_raid_set_options", 00:29:10.528 "params": { 00:29:10.528 "process_window_size_kb": 1024, 00:29:10.528 "process_max_bandwidth_mb_sec": 0 00:29:10.528 } 00:29:10.528 }, 00:29:10.528 { 00:29:10.528 "method": "bdev_iscsi_set_options", 00:29:10.528 "params": { 00:29:10.528 "timeout_sec": 30 00:29:10.528 } 00:29:10.528 }, 00:29:10.528 { 00:29:10.528 "method": "bdev_nvme_set_options", 00:29:10.528 "params": { 00:29:10.528 "action_on_timeout": "none", 00:29:10.528 "timeout_us": 0, 00:29:10.528 "timeout_admin_us": 0, 00:29:10.528 "keep_alive_timeout_ms": 10000, 00:29:10.528 "arbitration_burst": 0, 00:29:10.528 "low_priority_weight": 0, 00:29:10.528 "medium_priority_weight": 0, 00:29:10.528 "high_priority_weight": 0, 00:29:10.528 "nvme_adminq_poll_period_us": 
10000, 00:29:10.528 "nvme_ioq_poll_period_us": 0, 00:29:10.528 "io_queue_requests": 512, 00:29:10.528 "delay_cmd_submit": true, 00:29:10.528 "transport_retry_count": 4, 00:29:10.528 "bdev_retry_count": 3, 00:29:10.528 "transport_ack_timeout": 0, 00:29:10.528 "ctrlr_loss_timeout_sec": 0, 00:29:10.528 "reconnect_delay_sec": 0, 00:29:10.528 "fast_io_fail_timeout_sec": 0, 00:29:10.528 "disable_auto_failback": false, 00:29:10.528 "generate_uuids": false, 00:29:10.528 "transport_tos": 0, 00:29:10.528 "nvme_error_stat": false, 00:29:10.528 "rdma_srq_size": 0, 00:29:10.528 "io_path_stat": false, 00:29:10.528 "allow_accel_sequence": false, 00:29:10.528 "rdma_max_cq_size": 0, 00:29:10.528 "rdma_cm_event_timeout_ms": 0, 00:29:10.528 "dhchap_digests": [ 00:29:10.528 "sha256", 00:29:10.528 "sha384", 00:29:10.528 "sha512" 00:29:10.528 ], 00:29:10.528 "dhchap_dhgroups": [ 00:29:10.528 "null", 00:29:10.528 "ffdhe2048", 00:29:10.528 "ffdhe3072", 00:29:10.528 "ffdhe4096", 00:29:10.528 "ffdhe6144", 00:29:10.528 "ffdhe8192" 00:29:10.528 ], 00:29:10.528 "rdma_umr_per_io": false 00:29:10.528 } 00:29:10.528 }, 00:29:10.528 { 00:29:10.528 "method": "bdev_nvme_attach_controller", 00:29:10.528 "params": { 00:29:10.528 "name": "nvme0", 00:29:10.528 "trtype": "TCP", 00:29:10.528 "adrfam": "IPv4", 00:29:10.528 "traddr": "127.0.0.1", 00:29:10.528 "trsvcid": "4420", 00:29:10.528 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:10.528 "prchk_reftag": false, 00:29:10.528 "prchk_guard": false, 00:29:10.528 "ctrlr_loss_timeout_sec": 0, 00:29:10.528 "reconnect_delay_sec": 0, 00:29:10.528 "fast_io_fail_timeout_sec": 0, 00:29:10.528 "psk": "key0", 00:29:10.528 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:10.528 "hdgst": false, 00:29:10.528 "ddgst": false, 00:29:10.528 "multipath": "multipath" 00:29:10.528 } 00:29:10.528 }, 00:29:10.528 { 00:29:10.528 "method": "bdev_nvme_set_hotplug", 00:29:10.528 "params": { 00:29:10.528 "period_us": 100000, 00:29:10.528 "enable": false 00:29:10.528 } 00:29:10.528 }, 00:29:10.528 { 00:29:10.528 "method": "bdev_wait_for_examine" 00:29:10.528 } 00:29:10.528 ] 00:29:10.528 }, 00:29:10.528 { 00:29:10.528 "subsystem": "nbd", 00:29:10.528 "config": [] 00:29:10.528 } 00:29:10.528 ] 00:29:10.528 }' 00:29:10.528 09:31:04 keyring_file -- keyring/file.sh@115 -- # killprocess 93503 00:29:10.528 09:31:04 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 93503 ']' 00:29:10.528 09:31:04 keyring_file -- common/autotest_common.sh@958 -- # kill -0 93503 00:29:10.528 09:31:04 keyring_file -- common/autotest_common.sh@959 -- # uname 00:29:10.787 09:31:04 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:10.787 09:31:04 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93503 00:29:10.787 killing process with pid 93503 00:29:10.787 Received shutdown signal, test time was about 1.000000 seconds 00:29:10.787 00:29:10.788 Latency(us) 00:29:10.788 [2024-12-13T09:31:04.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.788 [2024-12-13T09:31:04.678Z] =================================================================================================================== 00:29:10.788 [2024-12-13T09:31:04.678Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:10.788 09:31:04 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:10.788 09:31:04 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:10.788 09:31:04 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 93503' 00:29:10.788 09:31:04 keyring_file -- common/autotest_common.sh@973 -- # kill 93503 00:29:10.788 09:31:04 keyring_file -- common/autotest_common.sh@978 -- # wait 93503 00:29:11.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:11.356 09:31:05 keyring_file -- keyring/file.sh@118 -- # bperfpid=93760 00:29:11.356 09:31:05 keyring_file -- keyring/file.sh@120 -- # waitforlisten 93760 /var/tmp/bperf.sock 00:29:11.356 09:31:05 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 93760 ']' 00:29:11.356 09:31:05 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:11.356 09:31:05 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:11.356 09:31:05 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:11.356 09:31:05 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:11.356 09:31:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:11.356 09:31:05 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:11.356 09:31:05 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:29:11.356 "subsystems": [ 00:29:11.356 { 00:29:11.356 "subsystem": "keyring", 00:29:11.356 "config": [ 00:29:11.356 { 00:29:11.356 "method": "keyring_file_add_key", 00:29:11.356 "params": { 00:29:11.356 "name": "key0", 00:29:11.356 "path": "/tmp/tmp.5BQ3k5Gdes" 00:29:11.356 } 00:29:11.356 }, 00:29:11.356 { 00:29:11.356 "method": "keyring_file_add_key", 00:29:11.356 "params": { 00:29:11.356 "name": "key1", 00:29:11.356 "path": "/tmp/tmp.Czhsqs6LKZ" 00:29:11.356 } 00:29:11.356 } 00:29:11.356 ] 00:29:11.356 }, 00:29:11.356 { 00:29:11.356 "subsystem": "iobuf", 00:29:11.356 "config": [ 00:29:11.356 { 00:29:11.356 "method": "iobuf_set_options", 00:29:11.356 "params": { 00:29:11.356 "small_pool_count": 8192, 00:29:11.356 "large_pool_count": 1024, 00:29:11.356 "small_bufsize": 8192, 00:29:11.356 "large_bufsize": 135168, 00:29:11.356 "enable_numa": false 00:29:11.356 } 00:29:11.356 } 00:29:11.356 ] 00:29:11.356 }, 00:29:11.356 { 00:29:11.356 "subsystem": "sock", 00:29:11.356 "config": [ 00:29:11.356 { 00:29:11.356 "method": "sock_set_default_impl", 00:29:11.356 "params": { 00:29:11.356 "impl_name": "uring" 00:29:11.356 } 00:29:11.356 }, 00:29:11.356 { 00:29:11.356 "method": "sock_impl_set_options", 00:29:11.356 "params": { 00:29:11.356 "impl_name": "ssl", 00:29:11.356 "recv_buf_size": 4096, 00:29:11.356 "send_buf_size": 4096, 00:29:11.356 "enable_recv_pipe": true, 00:29:11.356 "enable_quickack": false, 00:29:11.356 "enable_placement_id": 0, 00:29:11.356 "enable_zerocopy_send_server": true, 00:29:11.356 "enable_zerocopy_send_client": false, 00:29:11.356 "zerocopy_threshold": 0, 00:29:11.356 "tls_version": 0, 00:29:11.356 "enable_ktls": false 00:29:11.356 } 00:29:11.356 }, 00:29:11.356 { 00:29:11.356 "method": "sock_impl_set_options", 00:29:11.356 "params": { 00:29:11.356 "impl_name": "posix", 00:29:11.356 "recv_buf_size": 2097152, 00:29:11.356 "send_buf_size": 2097152, 00:29:11.356 "enable_recv_pipe": true, 00:29:11.356 "enable_quickack": false, 00:29:11.356 "enable_placement_id": 0, 00:29:11.356 "enable_zerocopy_send_server": true, 00:29:11.356 "enable_zerocopy_send_client": false, 00:29:11.356 "zerocopy_threshold": 0, 00:29:11.356 "tls_version": 0, 
00:29:11.356 "enable_ktls": false 00:29:11.356 } 00:29:11.356 }, 00:29:11.356 { 00:29:11.356 "method": "sock_impl_set_options", 00:29:11.356 "params": { 00:29:11.356 "impl_name": "uring", 00:29:11.356 "recv_buf_size": 2097152, 00:29:11.356 "send_buf_size": 2097152, 00:29:11.356 "enable_recv_pipe": true, 00:29:11.356 "enable_quickack": false, 00:29:11.356 "enable_placement_id": 0, 00:29:11.356 "enable_zerocopy_send_server": false, 00:29:11.356 "enable_zerocopy_send_client": false, 00:29:11.356 "zerocopy_threshold": 0, 00:29:11.356 "tls_version": 0, 00:29:11.356 "enable_ktls": false 00:29:11.356 } 00:29:11.356 } 00:29:11.356 ] 00:29:11.356 }, 00:29:11.356 { 00:29:11.356 "subsystem": "vmd", 00:29:11.356 "config": [] 00:29:11.356 }, 00:29:11.356 { 00:29:11.356 "subsystem": "accel", 00:29:11.356 "config": [ 00:29:11.356 { 00:29:11.356 "method": "accel_set_options", 00:29:11.357 "params": { 00:29:11.357 "small_cache_size": 128, 00:29:11.357 "large_cache_size": 16, 00:29:11.357 "task_count": 2048, 00:29:11.357 "sequence_count": 2048, 00:29:11.357 "buf_count": 2048 00:29:11.357 } 00:29:11.357 } 00:29:11.357 ] 00:29:11.357 }, 00:29:11.357 { 00:29:11.357 "subsystem": "bdev", 00:29:11.357 "config": [ 00:29:11.357 { 00:29:11.357 "method": "bdev_set_options", 00:29:11.357 "params": { 00:29:11.357 "bdev_io_pool_size": 65535, 00:29:11.357 "bdev_io_cache_size": 256, 00:29:11.357 "bdev_auto_examine": true, 00:29:11.357 "iobuf_small_cache_size": 128, 00:29:11.357 "iobuf_large_cache_size": 16 00:29:11.357 } 00:29:11.357 }, 00:29:11.357 { 00:29:11.357 "method": "bdev_raid_set_options", 00:29:11.357 "params": { 00:29:11.357 "process_window_size_kb": 1024, 00:29:11.357 "process_max_bandwidth_mb_sec": 0 00:29:11.357 } 00:29:11.357 }, 00:29:11.357 { 00:29:11.357 "method": "bdev_iscsi_set_options", 00:29:11.357 "params": { 00:29:11.357 "timeout_sec": 30 00:29:11.357 } 00:29:11.357 }, 00:29:11.357 { 00:29:11.357 "method": "bdev_nvme_set_options", 00:29:11.357 "params": { 00:29:11.357 "action_on_timeout": "none", 00:29:11.357 "timeout_us": 0, 00:29:11.357 "timeout_admin_us": 0, 00:29:11.357 "keep_alive_timeout_ms": 10000, 00:29:11.357 "arbitration_burst": 0, 00:29:11.357 "low_priority_weight": 0, 00:29:11.357 "medium_priority_weight": 0, 00:29:11.357 "high_priority_weight": 0, 00:29:11.357 "nvme_adminq_poll_period_us": 10000, 00:29:11.357 "nvme_ioq_poll_period_us": 0, 00:29:11.357 "io_queue_requests": 512, 00:29:11.357 "delay_cmd_submit": true, 00:29:11.357 "transport_retry_count": 4, 00:29:11.357 "bdev_retry_count": 3, 00:29:11.357 "transport_ack_timeout": 0, 00:29:11.357 "ctrlr_loss_timeout_sec": 0, 00:29:11.357 "reconnect_delay_sec": 0, 00:29:11.357 "fast_io_fail_timeout_sec": 0, 00:29:11.357 "disable_auto_failback": false, 00:29:11.357 "generate_uuids": false, 00:29:11.357 "transport_tos": 0, 00:29:11.357 "nvme_error_stat": false, 00:29:11.357 "rdma_srq_size": 0, 00:29:11.357 "io_path_stat": false, 00:29:11.357 "allow_accel_sequence": false, 00:29:11.357 "rdma_max_cq_size": 0, 00:29:11.357 "rdma_cm_event_timeout_ms": 0, 00:29:11.357 "dhchap_digests": [ 00:29:11.357 "sha256", 00:29:11.357 "sha384", 00:29:11.357 "sha512" 00:29:11.357 ], 00:29:11.357 "dhchap_dhgroups": [ 00:29:11.357 "null", 00:29:11.357 "ffdhe2048", 00:29:11.357 "ffdhe3072", 00:29:11.357 "ffdhe4096", 00:29:11.357 "ffdhe6144", 00:29:11.357 "ffdhe8192" 00:29:11.357 ], 00:29:11.357 "rdma_umr_per_io": false 00:29:11.357 } 00:29:11.357 }, 00:29:11.357 { 00:29:11.357 "method": "bdev_nvme_attach_controller", 00:29:11.357 "params": { 00:29:11.357 "name": 
"nvme0", 00:29:11.357 "trtype": "TCP", 00:29:11.357 "adrfam": "IPv4", 00:29:11.357 "traddr": "127.0.0.1", 00:29:11.357 "trsvcid": "4420", 00:29:11.357 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:11.357 "prchk_reftag": false, 00:29:11.357 "prchk_guard": false, 00:29:11.357 "ctrlr_loss_timeout_sec": 0, 00:29:11.357 "reconnect_delay_sec": 0, 00:29:11.357 "fast_io_fail_timeout_sec": 0, 00:29:11.357 "psk": "key0", 00:29:11.357 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:11.357 "hdgst": false, 00:29:11.357 "ddgst": false, 00:29:11.357 "multipath": "multipath" 00:29:11.357 } 00:29:11.357 }, 00:29:11.357 { 00:29:11.357 "method": "bdev_nvme_set_hotplug", 00:29:11.357 "params": { 00:29:11.357 "period_us": 100000, 00:29:11.357 "enable": false 00:29:11.357 } 00:29:11.357 }, 00:29:11.357 { 00:29:11.357 "method": "bdev_wait_for_examine" 00:29:11.357 } 00:29:11.357 ] 00:29:11.357 }, 00:29:11.357 { 00:29:11.357 "subsystem": "nbd", 00:29:11.357 "config": [] 00:29:11.357 } 00:29:11.357 ] 00:29:11.357 }' 00:29:11.616 [2024-12-13 09:31:05.313312] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:29:11.616 [2024-12-13 09:31:05.314411] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93760 ] 00:29:11.616 [2024-12-13 09:31:05.490679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.874 [2024-12-13 09:31:05.572556] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.133 [2024-12-13 09:31:05.803373] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:12.133 [2024-12-13 09:31:05.909914] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:12.391 09:31:06 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:12.391 09:31:06 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:29:12.391 09:31:06 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:29:12.391 09:31:06 keyring_file -- keyring/file.sh@121 -- # jq length 00:29:12.391 09:31:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:12.661 09:31:06 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:12.661 09:31:06 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:29:12.661 09:31:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:12.661 09:31:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:12.661 09:31:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:12.661 09:31:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:12.661 09:31:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:12.961 09:31:06 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:29:12.961 09:31:06 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:29:12.961 09:31:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:12.961 09:31:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:12.961 09:31:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:12.961 09:31:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:29:12.961 09:31:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:13.245 09:31:07 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:29:13.245 09:31:07 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:29:13.245 09:31:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:13.245 09:31:07 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:29:13.505 09:31:07 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:29:13.505 09:31:07 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:13.505 09:31:07 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.5BQ3k5Gdes /tmp/tmp.Czhsqs6LKZ 00:29:13.505 09:31:07 keyring_file -- keyring/file.sh@20 -- # killprocess 93760 00:29:13.505 09:31:07 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 93760 ']' 00:29:13.505 09:31:07 keyring_file -- common/autotest_common.sh@958 -- # kill -0 93760 00:29:13.505 09:31:07 keyring_file -- common/autotest_common.sh@959 -- # uname 00:29:13.505 09:31:07 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:13.505 09:31:07 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93760 00:29:13.505 killing process with pid 93760 00:29:13.505 Received shutdown signal, test time was about 1.000000 seconds 00:29:13.505 00:29:13.505 Latency(us) 00:29:13.505 [2024-12-13T09:31:07.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.505 [2024-12-13T09:31:07.395Z] =================================================================================================================== 00:29:13.505 [2024-12-13T09:31:07.395Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:13.505 09:31:07 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:13.505 09:31:07 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:13.505 09:31:07 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93760' 00:29:13.505 09:31:07 keyring_file -- common/autotest_common.sh@973 -- # kill 93760 00:29:13.505 09:31:07 keyring_file -- common/autotest_common.sh@978 -- # wait 93760 00:29:14.442 09:31:08 keyring_file -- keyring/file.sh@21 -- # killprocess 93490 00:29:14.442 09:31:08 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 93490 ']' 00:29:14.442 09:31:08 keyring_file -- common/autotest_common.sh@958 -- # kill -0 93490 00:29:14.442 09:31:08 keyring_file -- common/autotest_common.sh@959 -- # uname 00:29:14.442 09:31:08 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:14.442 09:31:08 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93490 00:29:14.442 killing process with pid 93490 00:29:14.442 09:31:08 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:14.442 09:31:08 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:14.442 09:31:08 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93490' 00:29:14.442 09:31:08 keyring_file -- common/autotest_common.sh@973 -- # kill 93490 00:29:14.442 09:31:08 keyring_file -- common/autotest_common.sh@978 -- # wait 93490 00:29:16.349 00:29:16.349 real 0m18.521s 00:29:16.349 user 0m43.708s 00:29:16.349 sys 0m2.876s 00:29:16.349 ************************************ 00:29:16.349 
END TEST keyring_file 00:29:16.349 ************************************ 00:29:16.349 09:31:09 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:16.349 09:31:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:16.349 09:31:09 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:29:16.349 09:31:09 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:29:16.349 09:31:09 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:16.349 09:31:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:16.349 09:31:09 -- common/autotest_common.sh@10 -- # set +x 00:29:16.349 ************************************ 00:29:16.349 START TEST keyring_linux 00:29:16.349 ************************************ 00:29:16.349 09:31:09 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:29:16.349 Joined session keyring: 408461278 00:29:16.349 * Looking for test storage... 00:29:16.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:29:16.349 09:31:09 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:16.349 09:31:09 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:29:16.349 09:31:09 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:16.349 09:31:10 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@345 -- # : 1 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@368 -- # return 0 00:29:16.349 09:31:10 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:16.349 09:31:10 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:16.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.349 --rc genhtml_branch_coverage=1 00:29:16.349 --rc genhtml_function_coverage=1 00:29:16.349 --rc genhtml_legend=1 00:29:16.349 --rc geninfo_all_blocks=1 00:29:16.349 --rc geninfo_unexecuted_blocks=1 00:29:16.349 00:29:16.349 ' 00:29:16.349 09:31:10 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:16.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.349 --rc genhtml_branch_coverage=1 00:29:16.349 --rc genhtml_function_coverage=1 00:29:16.349 --rc genhtml_legend=1 00:29:16.349 --rc geninfo_all_blocks=1 00:29:16.349 --rc geninfo_unexecuted_blocks=1 00:29:16.349 00:29:16.349 ' 00:29:16.349 09:31:10 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:16.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.349 --rc genhtml_branch_coverage=1 00:29:16.349 --rc genhtml_function_coverage=1 00:29:16.349 --rc genhtml_legend=1 00:29:16.349 --rc geninfo_all_blocks=1 00:29:16.349 --rc geninfo_unexecuted_blocks=1 00:29:16.349 00:29:16.349 ' 00:29:16.349 09:31:10 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:16.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.349 --rc genhtml_branch_coverage=1 00:29:16.349 --rc genhtml_function_coverage=1 00:29:16.349 --rc genhtml_legend=1 00:29:16.349 --rc geninfo_all_blocks=1 00:29:16.349 --rc geninfo_unexecuted_blocks=1 00:29:16.349 00:29:16.349 ' 00:29:16.349 09:31:10 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:29:16.349 09:31:10 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:16.349 09:31:10 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5267ba90-6d03-4c73-b69a-15b62f92a67a 00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5267ba90-6d03-4c73-b69a-15b62f92a67a 00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.349 09:31:10 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.349 09:31:10 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.349 09:31:10 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.349 09:31:10 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.349 09:31:10 keyring_linux -- paths/export.sh@5 -- # export PATH 00:29:16.349 09:31:10 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@51 -- # : 0 
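The nvmf/common.sh setup traced above also derives a host NQN/ID pair with nvme-cli and keeps the matching connect arguments in an array. A minimal sketch of that pattern (variable names mirror the trace; this is not the full helper):

    HOSTNQN=$(nvme gen-hostnqn)              # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    HOSTID=${HOSTNQN##*uuid:}                # the UUID portion doubles as the host ID
    NVME_HOST=(--hostnqn="$HOSTNQN" --hostid="$HOSTID")
    printf '%s\n' "${NVME_HOST[@]}"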
00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:16.349 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:16.349 09:31:10 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:16.349 09:31:10 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:16.349 09:31:10 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:16.349 09:31:10 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:16.349 09:31:10 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:29:16.349 09:31:10 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:29:16.349 09:31:10 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:29:16.349 09:31:10 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:29:16.350 09:31:10 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:16.350 09:31:10 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:29:16.350 09:31:10 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:16.350 09:31:10 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:16.350 09:31:10 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:29:16.350 09:31:10 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:16.350 09:31:10 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:16.350 09:31:10 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:29:16.350 09:31:10 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:29:16.350 09:31:10 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:29:16.350 09:31:10 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:29:16.350 09:31:10 keyring_linux -- nvmf/common.sh@733 -- # python - 00:29:16.350 09:31:10 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:29:16.350 /tmp/:spdk-test:key0 00:29:16.350 09:31:10 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:29:16.350 09:31:10 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:29:16.350 09:31:10 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:16.350 09:31:10 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:29:16.350 09:31:10 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:16.350 09:31:10 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:16.350 09:31:10 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:29:16.350 09:31:10 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:29:16.350 09:31:10 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:16.350 09:31:10 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:29:16.350 09:31:10 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:29:16.350 09:31:10 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:29:16.350 09:31:10 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:29:16.350 09:31:10 keyring_linux -- nvmf/common.sh@733 -- # python - 00:29:16.350 09:31:10 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:29:16.350 09:31:10 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:29:16.350 /tmp/:spdk-test:key1 00:29:16.350 09:31:10 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=93906 00:29:16.350 09:31:10 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:16.350 09:31:10 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 93906 00:29:16.350 09:31:10 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 93906 ']' 00:29:16.350 09:31:10 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.350 09:31:10 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:16.350 09:31:10 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:16.350 09:31:10 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:16.350 09:31:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:16.609 [2024-12-13 09:31:10.336038] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
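The prep_key calls above (keyring/common.sh@20, format_interchange_psk) wrap each raw hex key into the NVMe TLS PSK interchange format, NVMeTLSkey-1:<hash>:<base64 payload>:, before writing it to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 with mode 0600. A rough stand-in for that helper, assuming the payload is the configured PSK with a little-endian CRC-32 appended and that hash id 00 means no HMAC (the authoritative version is the python snippet invoked from test/nvmf/common.sh):

    key=00112233445566778899aabbccddeeff
    psk=$(python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:00:" + base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode() + ":")' "$key")
    echo "$psk" > /tmp/:spdk-test:key0
    chmod 0600 /tmp/:spdk-test:key0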
00:29:16.609 [2024-12-13 09:31:10.336517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93906 ] 00:29:16.868 [2024-12-13 09:31:10.514272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.868 [2024-12-13 09:31:10.593983] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.127 [2024-12-13 09:31:10.778185] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:17.386 09:31:11 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:17.386 09:31:11 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:29:17.386 09:31:11 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:29:17.386 09:31:11 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.386 09:31:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:17.386 [2024-12-13 09:31:11.214515] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:17.386 null0 00:29:17.386 [2024-12-13 09:31:11.246481] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:17.386 [2024-12-13 09:31:11.246752] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:17.386 09:31:11 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.386 09:31:11 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:29:17.386 776265411 00:29:17.386 09:31:11 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:29:17.386 296103888 00:29:17.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:17.645 09:31:11 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=93923 00:29:17.645 09:31:11 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:29:17.645 09:31:11 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 93923 /var/tmp/bperf.sock 00:29:17.645 09:31:11 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 93923 ']' 00:29:17.645 09:31:11 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:17.645 09:31:11 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:17.645 09:31:11 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:17.645 09:31:11 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:17.645 09:31:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:17.645 [2024-12-13 09:31:11.359466] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
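The keyctl calls above are the core of keyring_linux: each interchange-format key is loaded as a user-type key on the session keyring (@s) under the :spdk-test:key* names, and the serial numbers keyctl returns (776265411 and 296103888 in this run) are what the later checks and the cleanup search for. A condensed sketch of that flow with the same commands (reading the payloads back from the files prep_key wrote is an assumption of this sketch):

    sn0=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)
    sn1=$(keyctl add user :spdk-test:key1 "$(cat /tmp/:spdk-test:key1)" @s)
    keyctl search @s user :spdk-test:key0      # resolves the name back to $sn0
    keyctl print "$sn0"                        # dumps the NVMeTLSkey-1:00:... payload
    keyctl unlink "$sn0"                       # cleanup, as in linux.sh@34
    keyctl unlink "$sn1"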
00:29:17.645 [2024-12-13 09:31:11.359769] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93923 ] 00:29:17.645 [2024-12-13 09:31:11.533714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.904 [2024-12-13 09:31:11.656772] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.472 09:31:12 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:18.472 09:31:12 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:29:18.472 09:31:12 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:29:18.472 09:31:12 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:29:18.731 09:31:12 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:29:18.731 09:31:12 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:19.300 [2024-12-13 09:31:12.941102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:19.300 09:31:13 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:19.300 09:31:13 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:19.558 [2024-12-13 09:31:13.248767] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:19.558 nvme0n1 00:29:19.558 09:31:13 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:29:19.558 09:31:13 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:29:19.558 09:31:13 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:19.558 09:31:13 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:19.558 09:31:13 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:19.558 09:31:13 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:19.817 09:31:13 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:29:19.817 09:31:13 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:19.817 09:31:13 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:29:19.817 09:31:13 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:29:19.817 09:31:13 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:19.817 09:31:13 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:29:19.817 09:31:13 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:20.077 09:31:13 keyring_linux -- keyring/linux.sh@25 -- # sn=776265411 00:29:20.077 09:31:13 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:29:20.077 09:31:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
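check_keys above cross-checks SPDK's view against the kernel's: it asks the bdevperf app for its registered keys over the RPC socket, filters them with jq, and compares the reported serial number with what keyctl finds on the session keyring (the [[ ... == ... ]] comparisons continue just below). A simplified version of that check, reusing the rpc.py socket and jq filters from the trace:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    count=$("$rpc_py" -s "$sock" keyring_get_keys | jq length)
    sn_rpc=$("$rpc_py" -s "$sock" keyring_get_keys | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')
    sn_kernel=$(keyctl search @s user :spdk-test:key0)
    [[ $sn_rpc == "$sn_kernel" ]] && echo "key0 consistent, $count key(s) registered"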
00:29:20.077 09:31:13 keyring_linux -- keyring/linux.sh@26 -- # [[ 776265411 == \7\7\6\2\6\5\4\1\1 ]] 00:29:20.077 09:31:13 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 776265411 00:29:20.077 09:31:13 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:29:20.077 09:31:13 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:20.335 Running I/O for 1 seconds... 00:29:21.273 9576.00 IOPS, 37.41 MiB/s 00:29:21.273 Latency(us) 00:29:21.273 [2024-12-13T09:31:15.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.273 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:21.273 nvme0n1 : 1.01 9575.55 37.40 0.00 0.00 13272.05 4379.00 16801.05 00:29:21.273 [2024-12-13T09:31:15.163Z] =================================================================================================================== 00:29:21.273 [2024-12-13T09:31:15.163Z] Total : 9575.55 37.40 0.00 0.00 13272.05 4379.00 16801.05 00:29:21.273 { 00:29:21.273 "results": [ 00:29:21.273 { 00:29:21.273 "job": "nvme0n1", 00:29:21.273 "core_mask": "0x2", 00:29:21.273 "workload": "randread", 00:29:21.273 "status": "finished", 00:29:21.273 "queue_depth": 128, 00:29:21.273 "io_size": 4096, 00:29:21.273 "runtime": 1.013519, 00:29:21.273 "iops": 9575.548164365937, 00:29:21.273 "mibps": 37.40448501705444, 00:29:21.273 "io_failed": 0, 00:29:21.273 "io_timeout": 0, 00:29:21.273 "avg_latency_us": 13272.046782258443, 00:29:21.273 "min_latency_us": 4378.996363636364, 00:29:21.273 "max_latency_us": 16801.04727272727 00:29:21.273 } 00:29:21.273 ], 00:29:21.273 "core_count": 1 00:29:21.273 } 00:29:21.273 09:31:15 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:21.273 09:31:15 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:21.531 09:31:15 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:29:21.531 09:31:15 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:29:21.531 09:31:15 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:21.531 09:31:15 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:21.531 09:31:15 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:21.531 09:31:15 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:21.790 09:31:15 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:29:21.790 09:31:15 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:21.790 09:31:15 keyring_linux -- keyring/linux.sh@23 -- # return 00:29:21.790 09:31:15 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:21.790 09:31:15 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:29:21.791 09:31:15 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:21.791 
09:31:15 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:29:21.791 09:31:15 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:21.791 09:31:15 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:29:21.791 09:31:15 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:21.791 09:31:15 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:21.791 09:31:15 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:22.050 [2024-12-13 09:31:15.813136] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:22.050 [2024-12-13 09:31:15.813440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (107): Transport endpoint is not connected 00:29:22.050 [2024-12-13 09:31:15.814408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (9): Bad file descriptor 00:29:22.050 [2024-12-13 09:31:15.815401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:29:22.050 [2024-12-13 09:31:15.815445] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:22.050 [2024-12-13 09:31:15.815462] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:29:22.050 [2024-12-13 09:31:15.815478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
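The NOT-wrapped attach above is the negative half of the test: once the first controller has been detached, a second bdev_nvme_attach_controller that names :spdk-test:key1 is expected to fail, and the JSON-RPC error that proves it follows just below. A standalone sketch of the same expectation without the autotest_common.sh NOT helper:

    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1; then
        echo "attach with :spdk-test:key1 unexpectedly succeeded" >&2
        exit 1
    fi
    echo "attach failed as expected"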
00:29:22.050 request: 00:29:22.050 { 00:29:22.050 "name": "nvme0", 00:29:22.050 "trtype": "tcp", 00:29:22.050 "traddr": "127.0.0.1", 00:29:22.050 "adrfam": "ipv4", 00:29:22.050 "trsvcid": "4420", 00:29:22.050 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:22.050 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:22.050 "prchk_reftag": false, 00:29:22.050 "prchk_guard": false, 00:29:22.050 "hdgst": false, 00:29:22.050 "ddgst": false, 00:29:22.050 "psk": ":spdk-test:key1", 00:29:22.050 "allow_unrecognized_csi": false, 00:29:22.050 "method": "bdev_nvme_attach_controller", 00:29:22.050 "req_id": 1 00:29:22.050 } 00:29:22.050 Got JSON-RPC error response 00:29:22.050 response: 00:29:22.050 { 00:29:22.050 "code": -5, 00:29:22.050 "message": "Input/output error" 00:29:22.050 } 00:29:22.050 09:31:15 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:29:22.050 09:31:15 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:22.050 09:31:15 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:22.050 09:31:15 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:22.050 09:31:15 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:29:22.050 09:31:15 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:22.050 09:31:15 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:29:22.050 09:31:15 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:29:22.050 09:31:15 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:29:22.050 09:31:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:22.051 09:31:15 keyring_linux -- keyring/linux.sh@33 -- # sn=776265411 00:29:22.051 09:31:15 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 776265411 00:29:22.051 1 links removed 00:29:22.051 09:31:15 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:22.051 09:31:15 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:29:22.051 09:31:15 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:29:22.051 09:31:15 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:29:22.051 09:31:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:29:22.051 09:31:15 keyring_linux -- keyring/linux.sh@33 -- # sn=296103888 00:29:22.051 09:31:15 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 296103888 00:29:22.051 1 links removed 00:29:22.051 09:31:15 keyring_linux -- keyring/linux.sh@41 -- # killprocess 93923 00:29:22.051 09:31:15 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 93923 ']' 00:29:22.051 09:31:15 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 93923 00:29:22.051 09:31:15 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:29:22.051 09:31:15 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:22.051 09:31:15 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93923 00:29:22.051 killing process with pid 93923 00:29:22.051 Received shutdown signal, test time was about 1.000000 seconds 00:29:22.051 00:29:22.051 Latency(us) 00:29:22.051 [2024-12-13T09:31:15.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:22.051 [2024-12-13T09:31:15.941Z] =================================================================================================================== 00:29:22.051 [2024-12-13T09:31:15.941Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:22.051 09:31:15 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:22.051 09:31:15 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:22.051 09:31:15 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93923' 00:29:22.051 09:31:15 keyring_linux -- common/autotest_common.sh@973 -- # kill 93923 00:29:22.051 09:31:15 keyring_linux -- common/autotest_common.sh@978 -- # wait 93923 00:29:22.988 09:31:16 keyring_linux -- keyring/linux.sh@42 -- # killprocess 93906 00:29:22.988 09:31:16 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 93906 ']' 00:29:22.988 09:31:16 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 93906 00:29:22.988 09:31:16 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:29:22.988 09:31:16 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:22.988 09:31:16 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93906 00:29:22.988 killing process with pid 93906 00:29:22.988 09:31:16 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:22.988 09:31:16 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:22.988 09:31:16 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93906' 00:29:22.988 09:31:16 keyring_linux -- common/autotest_common.sh@973 -- # kill 93906 00:29:22.988 09:31:16 keyring_linux -- common/autotest_common.sh@978 -- # wait 93906 00:29:24.893 00:29:24.893 real 0m8.434s 00:29:24.893 user 0m15.176s 00:29:24.893 sys 0m1.504s 00:29:24.893 09:31:18 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:24.893 09:31:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:24.893 ************************************ 00:29:24.893 END TEST keyring_linux 00:29:24.893 ************************************ 00:29:24.893 09:31:18 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:24.893 09:31:18 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:29:24.893 09:31:18 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:24.893 09:31:18 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:29:24.893 09:31:18 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:24.893 09:31:18 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:29:24.893 09:31:18 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:24.893 09:31:18 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:24.893 09:31:18 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:24.893 09:31:18 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:24.893 09:31:18 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:29:24.893 09:31:18 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:24.893 09:31:18 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:24.893 09:31:18 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:29:24.893 09:31:18 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:29:24.893 09:31:18 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:29:24.893 09:31:18 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:29:24.893 09:31:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:24.893 09:31:18 -- common/autotest_common.sh@10 -- # set +x 00:29:24.893 09:31:18 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:29:24.893 09:31:18 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:29:24.893 09:31:18 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:29:24.893 09:31:18 -- common/autotest_common.sh@10 -- # set +x 00:29:26.798 INFO: APP EXITING 00:29:26.798 INFO: killing all VMs 
00:29:26.798 INFO: killing vhost app 00:29:26.798 INFO: EXIT DONE 00:29:27.057 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:27.315 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:29:27.315 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:29:27.883 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:27.883 Cleaning 00:29:27.884 Removing: /var/run/dpdk/spdk0/config 00:29:27.884 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:27.884 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:27.884 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:27.884 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:27.884 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:27.884 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:27.884 Removing: /var/run/dpdk/spdk1/config 00:29:27.884 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:27.884 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:27.884 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:28.141 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:28.141 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:28.141 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:28.141 Removing: /var/run/dpdk/spdk2/config 00:29:28.141 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:28.141 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:28.141 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:28.141 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:28.141 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:28.141 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:28.141 Removing: /var/run/dpdk/spdk3/config 00:29:28.141 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:28.141 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:28.141 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:28.141 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:28.141 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:28.141 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:28.141 Removing: /var/run/dpdk/spdk4/config 00:29:28.141 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:28.141 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:28.141 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:28.141 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:28.141 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:28.141 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:28.141 Removing: /dev/shm/nvmf_trace.0 00:29:28.141 Removing: /dev/shm/spdk_tgt_trace.pid59271 00:29:28.141 Removing: /var/run/dpdk/spdk0 00:29:28.141 Removing: /var/run/dpdk/spdk1 00:29:28.141 Removing: /var/run/dpdk/spdk2 00:29:28.141 Removing: /var/run/dpdk/spdk3 00:29:28.141 Removing: /var/run/dpdk/spdk4 00:29:28.141 Removing: /var/run/dpdk/spdk_pid59058 00:29:28.141 Removing: /var/run/dpdk/spdk_pid59271 00:29:28.141 Removing: /var/run/dpdk/spdk_pid59489 00:29:28.141 Removing: /var/run/dpdk/spdk_pid59589 00:29:28.141 Removing: /var/run/dpdk/spdk_pid59633 00:29:28.141 Removing: /var/run/dpdk/spdk_pid59761 00:29:28.141 Removing: /var/run/dpdk/spdk_pid59779 00:29:28.141 Removing: /var/run/dpdk/spdk_pid59938 00:29:28.141 Removing: /var/run/dpdk/spdk_pid60152 00:29:28.141 Removing: /var/run/dpdk/spdk_pid60318 00:29:28.141 Removing: /var/run/dpdk/spdk_pid60424 00:29:28.141 
Removing: /var/run/dpdk/spdk_pid60520 00:29:28.141 Removing: /var/run/dpdk/spdk_pid60631 00:29:28.141 Removing: /var/run/dpdk/spdk_pid60735 00:29:28.141 Removing: /var/run/dpdk/spdk_pid60773 00:29:28.141 Removing: /var/run/dpdk/spdk_pid60815 00:29:28.141 Removing: /var/run/dpdk/spdk_pid60880 00:29:28.141 Removing: /var/run/dpdk/spdk_pid60975 00:29:28.141 Removing: /var/run/dpdk/spdk_pid61439 00:29:28.141 Removing: /var/run/dpdk/spdk_pid61503 00:29:28.141 Removing: /var/run/dpdk/spdk_pid61566 00:29:28.141 Removing: /var/run/dpdk/spdk_pid61582 00:29:28.141 Removing: /var/run/dpdk/spdk_pid61709 00:29:28.141 Removing: /var/run/dpdk/spdk_pid61725 00:29:28.141 Removing: /var/run/dpdk/spdk_pid61852 00:29:28.141 Removing: /var/run/dpdk/spdk_pid61874 00:29:28.141 Removing: /var/run/dpdk/spdk_pid61942 00:29:28.141 Removing: /var/run/dpdk/spdk_pid61961 00:29:28.141 Removing: /var/run/dpdk/spdk_pid62025 00:29:28.141 Removing: /var/run/dpdk/spdk_pid62043 00:29:28.141 Removing: /var/run/dpdk/spdk_pid62235 00:29:28.141 Removing: /var/run/dpdk/spdk_pid62277 00:29:28.141 Removing: /var/run/dpdk/spdk_pid62359 00:29:28.141 Removing: /var/run/dpdk/spdk_pid62712 00:29:28.141 Removing: /var/run/dpdk/spdk_pid62725 00:29:28.141 Removing: /var/run/dpdk/spdk_pid62768 00:29:28.141 Removing: /var/run/dpdk/spdk_pid62799 00:29:28.141 Removing: /var/run/dpdk/spdk_pid62821 00:29:28.141 Removing: /var/run/dpdk/spdk_pid62852 00:29:28.141 Removing: /var/run/dpdk/spdk_pid62877 00:29:28.141 Removing: /var/run/dpdk/spdk_pid62905 00:29:28.141 Removing: /var/run/dpdk/spdk_pid62936 00:29:28.141 Removing: /var/run/dpdk/spdk_pid62956 00:29:28.141 Removing: /var/run/dpdk/spdk_pid62984 00:29:28.141 Removing: /var/run/dpdk/spdk_pid63015 00:29:28.141 Removing: /var/run/dpdk/spdk_pid63040 00:29:28.141 Removing: /var/run/dpdk/spdk_pid63062 00:29:28.141 Removing: /var/run/dpdk/spdk_pid63093 00:29:28.141 Removing: /var/run/dpdk/spdk_pid63119 00:29:28.141 Removing: /var/run/dpdk/spdk_pid63146 00:29:28.141 Removing: /var/run/dpdk/spdk_pid63177 00:29:28.141 Removing: /var/run/dpdk/spdk_pid63203 00:29:28.141 Removing: /var/run/dpdk/spdk_pid63225 00:29:28.400 Removing: /var/run/dpdk/spdk_pid63273 00:29:28.400 Removing: /var/run/dpdk/spdk_pid63293 00:29:28.400 Removing: /var/run/dpdk/spdk_pid63340 00:29:28.400 Removing: /var/run/dpdk/spdk_pid63424 00:29:28.400 Removing: /var/run/dpdk/spdk_pid63459 00:29:28.400 Removing: /var/run/dpdk/spdk_pid63475 00:29:28.400 Removing: /var/run/dpdk/spdk_pid63521 00:29:28.400 Removing: /var/run/dpdk/spdk_pid63537 00:29:28.400 Removing: /var/run/dpdk/spdk_pid63562 00:29:28.400 Removing: /var/run/dpdk/spdk_pid63611 00:29:28.400 Removing: /var/run/dpdk/spdk_pid63641 00:29:28.400 Removing: /var/run/dpdk/spdk_pid63677 00:29:28.400 Removing: /var/run/dpdk/spdk_pid63699 00:29:28.400 Removing: /var/run/dpdk/spdk_pid63720 00:29:28.400 Removing: /var/run/dpdk/spdk_pid63742 00:29:28.400 Removing: /var/run/dpdk/spdk_pid63763 00:29:28.400 Removing: /var/run/dpdk/spdk_pid63785 00:29:28.400 Removing: /var/run/dpdk/spdk_pid63805 00:29:28.400 Removing: /var/run/dpdk/spdk_pid63828 00:29:28.400 Removing: /var/run/dpdk/spdk_pid63863 00:29:28.400 Removing: /var/run/dpdk/spdk_pid63907 00:29:28.400 Removing: /var/run/dpdk/spdk_pid63923 00:29:28.400 Removing: /var/run/dpdk/spdk_pid63969 00:29:28.400 Removing: /var/run/dpdk/spdk_pid63985 00:29:28.400 Removing: /var/run/dpdk/spdk_pid64010 00:29:28.400 Removing: /var/run/dpdk/spdk_pid64057 00:29:28.400 Removing: /var/run/dpdk/spdk_pid64086 00:29:28.400 Removing: 
/var/run/dpdk/spdk_pid64121 00:29:28.400 Removing: /var/run/dpdk/spdk_pid64139 00:29:28.400 Removing: /var/run/dpdk/spdk_pid64158 00:29:28.400 Removing: /var/run/dpdk/spdk_pid64178 00:29:28.400 Removing: /var/run/dpdk/spdk_pid64197 00:29:28.400 Removing: /var/run/dpdk/spdk_pid64217 00:29:28.400 Removing: /var/run/dpdk/spdk_pid64232 00:29:28.400 Removing: /var/run/dpdk/spdk_pid64256 00:29:28.400 Removing: /var/run/dpdk/spdk_pid64344 00:29:28.400 Removing: /var/run/dpdk/spdk_pid64427 00:29:28.400 Removing: /var/run/dpdk/spdk_pid64585 00:29:28.400 Removing: /var/run/dpdk/spdk_pid64629 00:29:28.400 Removing: /var/run/dpdk/spdk_pid64682 00:29:28.400 Removing: /var/run/dpdk/spdk_pid64714 00:29:28.400 Removing: /var/run/dpdk/spdk_pid64742 00:29:28.400 Removing: /var/run/dpdk/spdk_pid64769 00:29:28.400 Removing: /var/run/dpdk/spdk_pid64819 00:29:28.400 Removing: /var/run/dpdk/spdk_pid64841 00:29:28.400 Removing: /var/run/dpdk/spdk_pid64931 00:29:28.400 Removing: /var/run/dpdk/spdk_pid64976 00:29:28.400 Removing: /var/run/dpdk/spdk_pid65043 00:29:28.400 Removing: /var/run/dpdk/spdk_pid65157 00:29:28.400 Removing: /var/run/dpdk/spdk_pid65243 00:29:28.400 Removing: /var/run/dpdk/spdk_pid65295 00:29:28.400 Removing: /var/run/dpdk/spdk_pid65419 00:29:28.400 Removing: /var/run/dpdk/spdk_pid65479 00:29:28.400 Removing: /var/run/dpdk/spdk_pid65523 00:29:28.400 Removing: /var/run/dpdk/spdk_pid65772 00:29:28.400 Removing: /var/run/dpdk/spdk_pid65886 00:29:28.400 Removing: /var/run/dpdk/spdk_pid65932 00:29:28.400 Removing: /var/run/dpdk/spdk_pid65963 00:29:28.400 Removing: /var/run/dpdk/spdk_pid66008 00:29:28.400 Removing: /var/run/dpdk/spdk_pid66058 00:29:28.400 Removing: /var/run/dpdk/spdk_pid66099 00:29:28.400 Removing: /var/run/dpdk/spdk_pid66143 00:29:28.400 Removing: /var/run/dpdk/spdk_pid66552 00:29:28.400 Removing: /var/run/dpdk/spdk_pid66592 00:29:28.400 Removing: /var/run/dpdk/spdk_pid66974 00:29:28.400 Removing: /var/run/dpdk/spdk_pid67458 00:29:28.400 Removing: /var/run/dpdk/spdk_pid67740 00:29:28.400 Removing: /var/run/dpdk/spdk_pid68664 00:29:28.400 Removing: /var/run/dpdk/spdk_pid69651 00:29:28.400 Removing: /var/run/dpdk/spdk_pid69781 00:29:28.400 Removing: /var/run/dpdk/spdk_pid69855 00:29:28.400 Removing: /var/run/dpdk/spdk_pid71327 00:29:28.400 Removing: /var/run/dpdk/spdk_pid71692 00:29:28.400 Removing: /var/run/dpdk/spdk_pid75442 00:29:28.400 Removing: /var/run/dpdk/spdk_pid75849 00:29:28.400 Removing: /var/run/dpdk/spdk_pid75963 00:29:28.400 Removing: /var/run/dpdk/spdk_pid76114 00:29:28.400 Removing: /var/run/dpdk/spdk_pid76149 00:29:28.400 Removing: /var/run/dpdk/spdk_pid76190 00:29:28.400 Removing: /var/run/dpdk/spdk_pid76225 00:29:28.400 Removing: /var/run/dpdk/spdk_pid76350 00:29:28.400 Removing: /var/run/dpdk/spdk_pid76494 00:29:28.400 Removing: /var/run/dpdk/spdk_pid76690 00:29:28.400 Removing: /var/run/dpdk/spdk_pid76785 00:29:28.400 Removing: /var/run/dpdk/spdk_pid76992 00:29:28.660 Removing: /var/run/dpdk/spdk_pid77093 00:29:28.660 Removing: /var/run/dpdk/spdk_pid77200 00:29:28.660 Removing: /var/run/dpdk/spdk_pid77576 00:29:28.660 Removing: /var/run/dpdk/spdk_pid78007 00:29:28.660 Removing: /var/run/dpdk/spdk_pid78008 00:29:28.660 Removing: /var/run/dpdk/spdk_pid78009 00:29:28.660 Removing: /var/run/dpdk/spdk_pid78294 00:29:28.660 Removing: /var/run/dpdk/spdk_pid78589 00:29:28.660 Removing: /var/run/dpdk/spdk_pid78597 00:29:28.660 Removing: /var/run/dpdk/spdk_pid80963 00:29:28.660 Removing: /var/run/dpdk/spdk_pid81379 00:29:28.660 Removing: /var/run/dpdk/spdk_pid81383 
00:29:28.660 Removing: /var/run/dpdk/spdk_pid81719 00:29:28.660 Removing: /var/run/dpdk/spdk_pid81739 00:29:28.660 Removing: /var/run/dpdk/spdk_pid81759 00:29:28.660 Removing: /var/run/dpdk/spdk_pid81794 00:29:28.660 Removing: /var/run/dpdk/spdk_pid81804 00:29:28.660 Removing: /var/run/dpdk/spdk_pid81890 00:29:28.660 Removing: /var/run/dpdk/spdk_pid81897 00:29:28.660 Removing: /var/run/dpdk/spdk_pid82002 00:29:28.660 Removing: /var/run/dpdk/spdk_pid82011 00:29:28.660 Removing: /var/run/dpdk/spdk_pid82120 00:29:28.660 Removing: /var/run/dpdk/spdk_pid82123 00:29:28.660 Removing: /var/run/dpdk/spdk_pid82575 00:29:28.660 Removing: /var/run/dpdk/spdk_pid82617 00:29:28.660 Removing: /var/run/dpdk/spdk_pid82724 00:29:28.660 Removing: /var/run/dpdk/spdk_pid82795 00:29:28.660 Removing: /var/run/dpdk/spdk_pid83169 00:29:28.660 Removing: /var/run/dpdk/spdk_pid83372 00:29:28.660 Removing: /var/run/dpdk/spdk_pid83818 00:29:28.660 Removing: /var/run/dpdk/spdk_pid84388 00:29:28.660 Removing: /var/run/dpdk/spdk_pid85256 00:29:28.660 Removing: /var/run/dpdk/spdk_pid85916 00:29:28.660 Removing: /var/run/dpdk/spdk_pid85919 00:29:28.660 Removing: /var/run/dpdk/spdk_pid87953 00:29:28.660 Removing: /var/run/dpdk/spdk_pid88020 00:29:28.660 Removing: /var/run/dpdk/spdk_pid88088 00:29:28.660 Removing: /var/run/dpdk/spdk_pid88155 00:29:28.660 Removing: /var/run/dpdk/spdk_pid88285 00:29:28.660 Removing: /var/run/dpdk/spdk_pid88352 00:29:28.660 Removing: /var/run/dpdk/spdk_pid88413 00:29:28.660 Removing: /var/run/dpdk/spdk_pid88480 00:29:28.660 Removing: /var/run/dpdk/spdk_pid88866 00:29:28.660 Removing: /var/run/dpdk/spdk_pid90095 00:29:28.660 Removing: /var/run/dpdk/spdk_pid90244 00:29:28.660 Removing: /var/run/dpdk/spdk_pid90492 00:29:28.660 Removing: /var/run/dpdk/spdk_pid91103 00:29:28.660 Removing: /var/run/dpdk/spdk_pid91264 00:29:28.660 Removing: /var/run/dpdk/spdk_pid91425 00:29:28.660 Removing: /var/run/dpdk/spdk_pid91531 00:29:28.660 Removing: /var/run/dpdk/spdk_pid91686 00:29:28.660 Removing: /var/run/dpdk/spdk_pid91795 00:29:28.660 Removing: /var/run/dpdk/spdk_pid92520 00:29:28.660 Removing: /var/run/dpdk/spdk_pid92562 00:29:28.660 Removing: /var/run/dpdk/spdk_pid92594 00:29:28.660 Removing: /var/run/dpdk/spdk_pid92958 00:29:28.660 Removing: /var/run/dpdk/spdk_pid92989 00:29:28.660 Removing: /var/run/dpdk/spdk_pid93024 00:29:28.660 Removing: /var/run/dpdk/spdk_pid93490 00:29:28.660 Removing: /var/run/dpdk/spdk_pid93503 00:29:28.660 Removing: /var/run/dpdk/spdk_pid93760 00:29:28.660 Removing: /var/run/dpdk/spdk_pid93906 00:29:28.660 Removing: /var/run/dpdk/spdk_pid93923 00:29:28.660 Clean 00:29:28.919 09:31:22 -- common/autotest_common.sh@1453 -- # return 0 00:29:28.919 09:31:22 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:29:28.919 09:31:22 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:28.919 09:31:22 -- common/autotest_common.sh@10 -- # set +x 00:29:28.919 09:31:22 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:29:28.919 09:31:22 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:28.919 09:31:22 -- common/autotest_common.sh@10 -- # set +x 00:29:28.919 09:31:22 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:28.919 09:31:22 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:28.919 09:31:22 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:28.919 09:31:22 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:29:28.919 09:31:22 -- spdk/autotest.sh@398 
-- # hostname 00:29:28.919 09:31:22 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:29.178 geninfo: WARNING: invalid characters removed from testname! 00:29:55.732 09:31:45 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:55.732 09:31:48 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:57.711 09:31:51 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:00.307 09:31:53 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:02.840 09:31:56 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:05.375 09:31:58 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:07.910 09:32:01 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:07.910 09:32:01 -- spdk/autorun.sh@1 -- $ timing_finish 00:30:07.910 09:32:01 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:30:07.910 09:32:01 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:07.910 09:32:01 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:30:07.910 09:32:01 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: 
--countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:07.910 + [[ -n 5248 ]] 00:30:07.910 + sudo kill 5248 00:30:07.920 [Pipeline] } 00:30:07.935 [Pipeline] // timeout 00:30:07.941 [Pipeline] } 00:30:07.955 [Pipeline] // stage 00:30:07.960 [Pipeline] } 00:30:07.974 [Pipeline] // catchError 00:30:07.984 [Pipeline] stage 00:30:07.987 [Pipeline] { (Stop VM) 00:30:07.998 [Pipeline] sh 00:30:08.277 + vagrant halt 00:30:11.563 ==> default: Halting domain... 00:30:18.140 [Pipeline] sh 00:30:18.414 + vagrant destroy -f 00:30:20.948 ==> default: Removing domain... 00:30:21.219 [Pipeline] sh 00:30:21.498 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:30:21.506 [Pipeline] } 00:30:21.520 [Pipeline] // stage 00:30:21.524 [Pipeline] } 00:30:21.537 [Pipeline] // dir 00:30:21.542 [Pipeline] } 00:30:21.555 [Pipeline] // wrap 00:30:21.560 [Pipeline] } 00:30:21.572 [Pipeline] // catchError 00:30:21.580 [Pipeline] stage 00:30:21.582 [Pipeline] { (Epilogue) 00:30:21.593 [Pipeline] sh 00:30:21.873 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:27.158 [Pipeline] catchError 00:30:27.160 [Pipeline] { 00:30:27.174 [Pipeline] sh 00:30:27.455 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:27.455 Artifacts sizes are good 00:30:27.464 [Pipeline] } 00:30:27.477 [Pipeline] // catchError 00:30:27.487 [Pipeline] archiveArtifacts 00:30:27.494 Archiving artifacts 00:30:27.619 [Pipeline] cleanWs 00:30:27.631 [WS-CLEANUP] Deleting project workspace... 00:30:27.631 [WS-CLEANUP] Deferred wipeout is used... 00:30:27.637 [WS-CLEANUP] done 00:30:27.639 [Pipeline] } 00:30:27.660 [Pipeline] // stage 00:30:27.665 [Pipeline] } 00:30:27.678 [Pipeline] // node 00:30:27.683 [Pipeline] End of Pipeline 00:30:27.721 Finished: SUCCESS